SPATIAL

Spatial Perception Affords Technological Interaction And Learning.

by Rhett Forbes

Table of Contents

0. Preface: Thesis Statement

The increasing use of screen-based devices is leading to the fragmentation of information, causing cognitive changes in comprehension and the abstraction of space. Understanding the natural ways people navigate off-screen space informs interaction designers of methods that actualize on-screen space. This study will demonstrate that spatial experiences enable sensemaking and reduce the fragmentation encountered when navigating on-screen information.

1. Introduction: an explanation of the thesis statement.



Understanding the natural ways people navigate off-screen space informs interaction designers of methods that actualize on-screen space. On-screen space is spatial in two ways: first, there is the virtual space of online objects, text and media; second, there is the cognitive domain, knowledge and intellectual skills (Tripp). A deeper understanding of spatial metaphors may provide a basis for interaction design theory to take advantage of the cognitive domain. This study will demonstrate that spatially interactive experiences enable sensemaking and reduce the fragmentation and displacement encountered when navigating on-screen information.

2. Screen Space: a metaphor.

In the early days of HCI (human-computer interaction), we began using spatial metaphors to provide a cognitive transition. The terms 'desktop' and 'laptop' both capture representations of physical space. When the internet arrived, the terms 'cyberspace', 'hyperspace', and the 'World Wide Web' were used to define a new and alien network. Kevin Lynch coined the term 'wayfinding' in his book The Image of the City to describe the concept of environmental legibility, the elements of the physical environment that allow us to navigate space (“Navigation and Wayfinding”). Wayfinding rode into the World Wide Web through yet another spatial metaphor: navigating a space populated by places we call sites (“Navigation and Wayfinding”).

We have been trained to transfer ideas through space; it is how we think, where we are, and what we know (Ray). Even Gibson, who coined the term 'cyberspace', believes it no longer describes our current relationship with technology. He now argues that there are not two separate realities, one of atoms and one of bits called cyberspace, but “one blended or augmented reality where atoms and bits interact and continuously influence one another” (qtd. in Ray).

Cyberspace is detached from 'real' physical space because our bodies are rooted in three-dimensional space. That rooting makes space the first frame we process in terms of perception. The web, however, is like the brain: a system of links and nodes. This network quality has massive implications for learning (K. Carroll).

3. Transitioning: the role of the Graphic Designer.

McLuhan wrote, “The artist is the person who invents the means to bridge between biological inheritance and the environments created by technological innovation” (McLuhan 98). The technologies that define a web page (HTML5, CSS, responsive layouts, and so on) enable us to do the things we do, but technology is there to support the message; neither stands apart from the other. There is an emerging technological convergence, the process of merging technologies into new tools. Technology carries out basic tasks; through technological convergence, devices can interact with existing and other technologies, leading to new functions (Conjecture Corporation). Gordon Moore observed that the number of transistors per square inch had doubled every year since their invention (Intel). Moore’s law predicts that this will continue indefinitely. In other words, advances in technology keep arriving, and with them more and more technological transitions.

If today’s graphic designer combines art and technology to communicate, then our job truly lies in the transition between technology and its people. The 'new reader', our user, is the result of our relationship with hypertext. Research suggests that hypertext degrades the quality of reader engagement and lowers comprehension. Hypertext essentially fragments text (Miall), bouncing you around a room of related information. The rapid growth and convergence of screen technology increases our exposure to hypertext and continues to fragment information, causing cognitive changes in comprehension and the abstraction of screen space itself.

4. Fragmentation & Displacement: the problem.

We tend to think of navigating a website as clicking from page to page via an extended input device (Bradley). The web, however, is a chaotic place where users beam from page to page at will. The web is a place of travel, and like most travelers, users want to reach their destination in as little time as possible. Jessica Helfand criticized the relationship between technology and its users in her 2001 essay “Dematerialization of Screen Space”. Helfand’s thoughts on displacement and the role of the view-port, specifically her idea that viewers are “moving targets” (Armstrong 98), highlight the concept of space in a space-less environment. This lost-in-space feeling calls for the easy-to-understand navigational cues we use off-screen. Links scattered throughout a text can be distracting; they pull the reader away from the author's vision and sequence, causing displacement and information fragmentation. This is largely because the web is not spatially navigable and our reading experiences have lost their spatial sense (Changizi); it’s just a lot of teleportation. A term that captures the problem surrounding fragmentation and displacement is the ‘Google-beam’.

5. The New Reader: the new user.

In order to transition this new reader into any technology or experience, we first have to answer the question: how do we read online? According to usability guru Jakob Nielsen, we don’t (Nielsen, “How Users Read on the Web”). What his group found was that readers don’t read; they skim and scan, looking for keywords and anything that catches their attention.

Eye-tracking studies have a lot to reveal about the 'new reader'. Early studies of print showed that reading is not a strictly left-to-right process; we sweep our eyes over text, stopping on 'fixations' (NICHCY). The findings showed that viewers fixate on a word, with fixations chosen by word length and by whether the word is a content word or a function word. Similar eye-tracking studies of online reading by the Nielsen Norman Group discovered a dominant reading pattern: “F is for Fast” (J. Nielsen, “F-Shaped Pattern For Reading Web Content”). Starting from the upper left corner, users sweep their eyes across the page in an 'F'-shaped pattern: two horizontal swipes across and one vertical swipe down the left side. Overall, research has shown that users scan and skim web content. Seventy-nine percent of users scan pages, while only sixteen percent read word by word (J. Nielsen, “F-Shaped Pattern For Reading Web Content”).

Printed material is still statistically more engaging than on-screen material, but that gap is closing. Eye-tracking studies of newspapers show that fifty-five percent of words were read in printed versions, while forty-four percent of the same articles were read in the online version (Holmqvist). Psychologists suggest that the problem may be the excessive selectivity brought on by information’s ease of access (NICHCY).

Long-term studies showed that web users changed from ‘surfing’ to what is being called ‘information foraging’, in other words, skimming for something very specific. One could argue that all of this depends on the content, and that online users are looking for an answer, not a casual read. Researchers say this is because scanning is natural while reading is unnatural (NICHCY).

6. Sensemaking: a goal.

The 'new reader' certainly brings many challenges to human-computer interaction [HCI] and requires a design methodology capable of targeting the cognitive domain. HCI is an area of research and practice that emerged in computer science in the early 1980s. It drew on cognitive science and human factors and has expanded steadily ever since. Throughout that journey, HCI has crossed many disciplines; today it is best defined as a “collection of research and practices in human-centered informatics” (J. Carroll).

Benjamin Bloom identified three domains of educational activities in 1956: cognitive [knowledge], affective [attitude], and psychomotor [skills]. The cognitive domain involves knowledge and the development of intellectual skills (Bloom). In 1983, Brenda Dervin studied sense-making in terms of the cognitive gap present in HCI: a 'gap' between knowledge and understanding that occurs in an individual's experience when attempting to make sense of observed data (Dervin).

To build a methodology aimed at this cognitive gap or domain, we need a humanistic approach. UCD, or user-centered design, is probably the go-to method today in UX/UXD, or user experience design. UX is the discipline that embodies what we do: experience as an objective. UCD is the process: how we do it. Step one, and the most important, is research into the users' world to gain an understanding of what they do and how they do it. Next is the conceptual stage of ideation, where we address the needs found in the research stage. Once we have an idea, we move on to prototypes and evaluation. Lastly, we test and repeat the process.

UCD is a starting point for a humanistic approach but lacks a full 'human' picture. A current trend in humanistic interaction design is the persona method. A persona is a description of a fictitious person; a form of storytelling with a focus on evoking empathy for the purpose of identification (L. Nielsen). As readers we engage with characters; we have a complete description of a “human” that we relate to (L. Nielsen). The benefit of persona descriptions is that we gain understanding and empathy outside of our own realities. It is the balance of data and knowledge, sensemaking, that makes the persona method a “defense against automated thinking” (L. Nielsen) in design. An HCI approach that is user-centered and based on accurate personas builds a methodology aimed at the cognitive domain.

7. Wayfinding: the cognitive side of navigation.

In 1929, Karl Lashley trained his lab rat to run a maze. One day the rat escaped near the starting point and ran across the top of the maze directly to the goal-box. This suggested that the rat had a mental map of the territory, not a trained path to the goal. This rat's determination for cheese led to what Edward C. Tolman in 1948 coined a 'cognitive map' (Tripp).

The prominent method for tapping into the cognitive domain and the natural world lies within interface design, specifically navigation. Navigation, however, is not by itself cognitive; wayfinding is the cognitive element of navigation (Darken and Peterson). Wayfinding does not involve movement but guides it; wayfinding and motion combine into the process of navigation. An essential part of wayfinding is the development and use of a cognitive map, a mental representation of an environment. It is the “representation of spatial knowledge in human memory” that constitutes a cognitive map (Darken and Peterson).

Spatial knowledge presents a window into the cognitive domain. In nature, information has a physical place and we know how to get to it. The web and digital media lack such spatial navigability, scale, and movement. We don’t pass familiar landmarks; we just suddenly beam into a new place, and the “journey” itself provides no information. This lack of environmental legibility is what makes the link home so crucial in web design. The components and elements of wayfinding can be applied to the screen through a spatial metaphor: navigating a space populated by places we call sites.

Wayfinding has four components (“Navigation and Wayfinding”):

  • Orientation: Where am I?
  • Route decisions: Where do I want to go?
  • Mental mapping: Where was I, and where should I be?
  • Closure: Have I arrived at the right place?

Wayfinding has five types of elements (“Navigation and Wayfinding”):

  • Paths: Familiar streets, walkways, subway routes, bus lines.
  • Edges: The physical barriers of walls, fences, rivers, or shorelines.
  • Districts: Places with a distinct identity, such as New York.
  • Nodes: Major intersection or meeting places, such as Grand Central Terminal.
  • Landmarks: Tall, visible structures that allow you to orient over long distances.

We can carry these principles of physical-world wayfinding to the screen (“Navigation and Wayfinding”):

  • Paths: Consistent navigation paths.
  • Edges: The physical edge of the browser or view-port.
  • Regions: The identity and separation of site sections.
  • Nodes: Choices on home and major menu pages.
  • Landmarks: Consistent navigation cues & graphics that keep the user oriented.

8. Creating Space: Regions, Boundaries, Edges, and Paths

As in UI design, consistency is the golden rule of wayfinding, but if everything looks the same, there are no edges (“Navigation and Wayfinding”). If there are no edges, how does a user know when they have moved from one space to another? To answer this conundrum we have to start at the foundation. A web system is designed on a page grid, and a solid grid allows for visual flexibility. Flexibility allows us to use identifiable regions and edges within the larger space: a space within a space. If a user moves from one region to another, the design needs to show the user that they crossed a regional boundary. One way of doing this is semantic sectioning, grouping each region into one section.
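
A minimal sketch of semantic sectioning, assuming HTML5 section elements; the region names, ids, and styles here are hypothetical, and the point is simply that each region gets its own identity and a visible edge:

    <!-- Each region of the larger space is one semantic section.  -->
    <!-- The ids and class names below are hypothetical examples.  -->
    <main>
      <section id="work" class="region region-work">
        <h2>Work</h2>
        <!-- region content -->
      </section>
      <section id="studio" class="region region-studio">
        <h2>Studio</h2>
        <!-- region content -->
      </section>
    </main>

    <style>
      /* A distinct background and a heavy top border act as the region's
         identity and edge, signaling that a boundary has been crossed. */
      .region        { padding: 4rem 0; border-top: 4px solid #222; }
      .region-work   { background: #f5f5f5; }
      .region-studio { background: #e8eef5; }
    </style>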

Once a solid semantic structure with regions and edges is defined, we need to provide a path. The typical navigation systems we use today are path based. The most used pattern is global navigation, which is both predictable and consistent across a site. However, paths can also exist in the user’s mind: sites you visit daily, repeated patterns, spatial awareness, and even breadcrumb trails all show you where you came from and where you want to go (“Navigation and Wayfinding”); in other words, mental mapping. The problem lies in the fact that users often arrive at a page without having followed a deliberate path, making orientation cues vital to UIs. For example, the most used form of web wayfinding is search; it cuts right across boundaries and regions and delivers you directly from one point in a site to another. This causes fragmentation and displacement, in other words, the all too familiar lost-in-space feeling.

9. Spatial Navigability: using our sense of direction to navigate a 2D plane.

In the context of technology, spatial navigation is the ability to navigate between elements within a document or UI in relation to their location. A good example is how pressing Tab in a login form brings you to the next field, based on the semantic structure. However, CSS (Cascading Style Sheets) allows us to break that semantic sequence. Because hypertext hides its structure while physical environments show theirs, spatial navigation uses our sense of direction to navigate on-screen space. A common example in use today is using arrow keys to navigate content. This allows users to move through space without having to scan content, reducing fragmentation, fighting displacement, enforcing sequence, and limiting information foraging.
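
A minimal sketch of arrow-key navigation in plain JavaScript, assuming the hypothetical '.region' sections from the earlier sketch define the sequence; this is one possible illustration, not a prescribed technique from the sources cited:

    // Hypothetical sketch: step through the page's regions with the arrow keys.
    // Assumes the '.region' sections from the previous sketch define the sequence.
    var regions = document.querySelectorAll('.region');
    var current = 0;

    document.addEventListener('keydown', function (event) {
      if (regions.length === 0) { return; }
      if (event.key === 'ArrowDown' || event.key === 'ArrowRight') {
        current = Math.min(current + 1, regions.length - 1);
      } else if (event.key === 'ArrowUp' || event.key === 'ArrowLeft') {
        current = Math.max(current - 1, 0);
      } else {
        return; // leave every other key alone
      }
      event.preventDefault();
      // Bring the chosen region into view so movement follows the page's sequence.
      regions[current].scrollIntoView({ behavior: 'smooth' });
    });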

Spatial awareness is the understanding and use of space in an effective way that transfers to the user. This could include effective use of whitespace, grouping elements close together to create tension or relevancy, or semantic sectioning, to name a few. Once again, spatial awareness starts with a grid system. A grid divides the canvas into sections consisting of margins, columns, and zones based on a mathematical system. The goal of the grid is to achieve spatial awareness.
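
A minimal sketch of a page grid in CSS, assuming a hypothetical twelve-column system; the measurements and the use of the CSS grid layout module are illustrative choices, not taken from the sources cited:

    /* Hypothetical 12-column grid: margins, columns, and gutters from one system. */
    .page {
      max-width: 960px;                        /* outer zone of the canvas          */
      margin: 0 auto;                          /* centered, with equal side margins */
      display: grid;
      grid-template-columns: repeat(12, 1fr);  /* twelve equal columns              */
      column-gap: 20px;                        /* gutters between columns           */
    }

    /* Zones are built by spanning columns. */
    .main    { grid-column: 1 / span 8; }
    .sidebar { grid-column: 9 / span 4; }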

10. Context: The subconscious realization of spatial awareness.

Spatial context is about spatially adding the where to the who, what, why, and when. Using spatial relationships, that is, how people interact with their surroundings and how this information is gathered and acted upon, can provide experiences that improve sensemaking. Context awareness originated in HCI as a term describing how systems respond to changes in their surroundings. Spatial contextual awareness connects contextual information to users and devices. This is closely related to sensemaking because “spatial contextual awareness permits a unique, user-centered perspective in which conceptualizations imbue spatial structures with meaning” (Freksa).

Contextual cuing refers to the manner in which the human brain gathers information from visual elements and their surroundings, a spatial process. It provides guidance derived from past experience of the physical world. In other words, attention can be guided by acquired knowledge about “spatial invariants” (Chun). Repeated proximity or grouping of objects in the same locations makes us faster at finding those objects again. Tests have shown that the effect is almost entirely subconscious.

11. Skeuomorphs: the method of transitioning.

Skeuomorphs, like contextual cuing, draw on the past: they are derivative objects that retain ornamental design cues from a structure that was necessary in the original. Skeuomorphs offer cues of familiarity with the physical world; they have been, and must continue to be, a major player in HCI transitions. They offer a method of bringing physical actions and associations into digital environments through cognition, relevancy, and metaphor. A button is a button; it should act and look like it can be pressed, for being pressed is its sole purpose. The web is a place built on and with metaphors, and skeuomorphs apply those metaphors to screen environments.

12. Actions and Interactions: all technology is an extension of ourselves.

On-screen navigation requires actions from the physical world and reactions from the interface that result in interactions. Sara Price's research in “Let’s Get Physical: The Learning Benefits of Interacting in Digitally Augmented Physical Spaces” tells us that there are three main types of physical interactions. The most prominent is our interaction with physical tools: the actions of drawing or writing with a pen, chalk, paintbrush, and so on (141). These interactions are familiar actions and are known to “externalize cognition” (141). Next, we have physical spatial movements: walking, crawling, dancing, gesturing, and the like (141). These actions are the most suited to augmentation; we can use them to trigger various events that transform the user as part of the experience (149). The last type of interaction is combining artifacts with each other, the physical activity of placing one object spatially in relation to another, such as organizing files in a folder.

As McLuhan said, “all technology is an extension of ourselves”. The relationship between physical input and the reactive output in a system is handled by controllers. Today we use continuous controllers when accessing information. A computer mouse, for example, offers various types of physical interaction from a single device. We also have gestural interfaces powered by multi-touch GUI screens or "open" physical interaction like Nintendo's Wii or the Xbox Kinect. Gestural interfaces are primarily focused on using physical interaction to control virtual objects; this provides a way to make the digital world more tangible (bresslergroup).

Gestural controllers connect users with the surface of the screen; they require spatial awareness and are backward compatible with continuous controllers, making them the perfect basis for spatial navigation. Price found that physical movements in digital environments result in “contextually relevant digital information” (145). This is because our movements are in proximity to where events occur. The result is that our actions are more physical in their interactions with the environment. Digital information is “triggered, obtained or made present as part of the interaction” (149), leading to more engaged users and to sensemaking.

13. Linearized Design: Control over sequence.

Linear design is the most prominent platform for sensemaking. You’re reading linear content right now: language and ideas are streams of thought presented in order. Although users may skip back and forth in video media, those media restrict their audiences to a timeline (“Webdesigner Depot”). On a website viewed within the space of a browser, we see a timeline metaphor in the scroll bar. A scroll bar lets us move the page into view. We interact with this 'bar' through controllers like the mouse wheel or touch and click gestures. A scroll bar is not only a controller; it is also a timeline. It constantly shows our distance from home, the end, or the goal, and our progression through a specific space; in other words, it fights page displacement.
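
As an illustration of the scroll bar as a timeline, a minimal sketch of a reading-progress indicator in plain JavaScript; the element id and the indicator itself are hypothetical additions, since the native scroll bar already conveys the same information:

    // Hypothetical progress bar: its width reflects how far through the page the reader is.
    var bar = document.getElementById('progress'); // e.g. a thin, fixed-position <div>

    window.addEventListener('scroll', function () {
      if (!bar) { return; }
      var scrollable = document.documentElement.scrollHeight - window.innerHeight;
      var progress = scrollable > 0 ? window.scrollY / scrollable : 0;
      bar.style.width = (progress * 100) + '%'; // distance traveled through the space
    });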

The popular blog Webdesigner Depot stated it best in the article “Guiding Users with Linear Presentation”: the real problem isn’t how to design a linear experience but “how to keep attention in a medium built on distractions”. Web designers can’t control external factors. Besides “Where am I?”, a web page must constantly answer the unspoken questions “What’s in it for me?” and “Is something better on?” (“Webdesigner Depot”). Users will always have the option to leave. Persuading users to stick to a track requires unconventional thinking.

14. Single-Page Site: A Single-Space Site

One method that activates the cognitive domain, showing the distance from home, the end or goal, and our progression through a specific space, is the single-page site. Single-page sites place all of their content into one long page that users scroll. The 'above the fold' newspaper metaphor is an antiquated idea on-screen. With such large screens, rising pixel density, and itty-bitty mobile devices, the fold has vanished; it is impossible to fix a dimension to it (Levins). Today 'above the scroll' is a more appropriate metaphor.

Experience control in a single-page site depends on the user's desire to scroll. In a multi-page website, users control the pages they visit, whereas in a single-page website, designers exercise that control through click, scroll, or touch events (“Webdesigner Depot”). The single-page site can also be viewed as a single-space site: it provides the big picture and reduces fragmentation by enforcing a linear progression of sequence. The pages in a book offer a pause, a moment of reflection, and time to process. The same can be applied to the single-page site, where breaks can be achieved in the text using images or semantic sectioning.

The single-page site uses sections: its sections are its 'pages' or regions, and the page itself is the site. Semantic HTML allows us to place multiple sections into a single page and fight page-to-page displacement, but we still have inner-page displacement. When we link to a point on a page, the standard tool that HTML provides is called an anchor. There is, however, a usability problem: clicking that anchor makes the user beam to that point. Because the user doesn't see any of the content between point A and point X, they feel displaced and lost; spatial awareness and the big picture are destroyed. Luckily, there is a jQuery technique known as smooth scrolling that fights this inner-page displacement and fragmentation in the single-page site. Smooth scrolling scrolls the page to the destination content, but takes a short time to do so. This time allows the user to see what content is being skipped over while providing a clear sense of their current location. The effect is aesthetically pleasing and ensures users keep their bearings and the big picture is not fragmented (Rohler).
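
A minimal sketch of the smooth-scrolling technique, assuming jQuery is loaded and that in-page anchors point at section ids; the selector and duration are illustrative and are not taken from Rohler's guide:

    // Intercept clicks on in-page anchors and animate the scroll instead of beaming.
    $('a[href^="#"]').on('click', function (event) {
      var target = $(this.hash);           // the section the anchor points at
      if (target.length === 0) { return; } // fall back to the default jump
      event.preventDefault();
      // Animate over ~600 ms so the user sees the content being passed over.
      $('html, body').animate({ scrollTop: target.offset().top }, 600);
    });

Because the animation targets the same anchors the browser would otherwise jump to, the page still degrades gracefully if JavaScript is unavailable.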

15. Linear Navigation: following a logical path.

If the single-page site follows sequence, so must its navigational paths. Linear navigation can be referred to as program control and non-linear navigation as learner control (Martin). The idea behind linear navigation is that the visitor follows pages in a predefined sequence determined by the semantic structure. The breadcrumb trail is a type of linear navigation: a step-by-step, page-by-page, goal-driven system. The user starts at one end and continues to the conclusion or goal.
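
A minimal sketch of a linear path in HTML, assuming the hypothetical sections of a single-space site; each step offers only the previous and next stops, keeping the visitor on the predefined sequence:

    <!-- Hypothetical step-by-step navigation: one way forward, one way back. -->
    <nav class="linear-nav" aria-label="Sequence">
      <a href="#chapter-2" rel="prev">&larr; Previous: Chapter 2</a>
      <a href="#chapter-4" rel="next">Next: Chapter 4 &rarr;</a>
    </nav>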

The biggest problem with linear structures is that you move farther and farther away from the home page, which can be confusing on websites with multiple pages (Bradley). Therefore, this structure calls for a single-page site where you are always home, never leaving the 'yellow brick road'. One way of looking at this is that rather than bringing the user to the content, we are bringing the content to the user's view-port: moving them in space; travel.

The goal of most navigation systems, and of any linear structure, is to achieve a logical progression of thought and paths to ever-increasing content (DeFazzio). Linear navigation follows a straight line through a website, allowing the visitor to move back and forth through the sequence. Used correctly, it is very effective for storytelling, which pairs it with the persona method and traditional graphic design.

16. Establishing a Mental Model: providing directions.

The advantage of the cognitive domain is that it provides users with a mental model: beliefs about how the system works. The theory is that users interact based on those beliefs. The primary objective of HCI is usability, and the closer the designed or 'represented' model comes to the user's mental model, the easier the user will find the system (Cooper).

A mental model is channeled through the process of mapping; natural mapping is the process of using physical analogies or metaphors to achieve instant comprehension (Cooper). Mental models allow users to comprehend and, more importantly, remember the mappings between actions and interactions, or the result of an action; this builds a cognitive map of an interface, just like the one Lashley's rat had of the maze. People like to understand how something works, why it works, and where they are working; spatial metaphors in design can help users establish a 'real' model and make sense of observed data.

17. Dematerialization: Rematerializing Screen-Space.

Cyberspace is a place to interact and act, but that space is disconnected from the body; it is cognitive. Dematerialization of space represents the abstraction, or separation, of physical and information space. Real-world human interactions are channeled through technology, reducing human interaction to function. We have been trained to transfer ideas through space; it is where we are and what we know.

Sensemaking is delivered through an experience, an incident that we as viewers try to understand and extract meaning from. A design approach that is user-centered and based on personas builds a methodology aimed at the cognitive domain. Rematerializing on-screen space is the process of bridging physical and virtual space with cognitive space. Spatial metaphors, skeuomorphs, and spatial awareness help our bodies travel with us through virtual space. Wayfinding and maintaining a sense of 'travel' transforms functions into actions and reduces the foraging that defines the gap in the 'new reader'. A single-space site contains the big picture in one spatially navigable space, allowing for mental mappings. This deeper understanding of spatial experiences embraces our relationship with the screen surface and begins to 'rematerialize', or actualize, screen space, enabling sensemaking when navigating on-screen information.

Visual Presentations

Works Cited

Al-Fedaghi, Sabah. “Awareness of Context and Privacy.” Association for Information Science and Technology. Jan. 2012. Web. 4 Mar. 2013.

Armstrong, Helen. Graphic Design Theory, Readings From The Field. New York: Princeton Architectural Pr, 2009. Print.

Bloom, Benjamin S. Taxonomy of Educational Objectives, Handbook 1: Cognitive Domain. Boston: Allyn and Bacon, 1984. Print.

Bradley, Steven. “Exploration Of Single-Page Websites.” Smashing Magazine. 5 Nov. 2012. Web. 3 Mar. 2013.

Bresslergroup. “Interaction Design - Physical Interaction Design.” Bresslergroup. Web. 4 Mar. 2013.

Carroll, John M. “Human Computer Interaction (HCI).” The Encyclopedia of Human-Computer Interaction. 2nd ed. Eds. Mads Soegaard and Rikke Friis Dam. Aarhus, Denmark: The Interaction Design Foundation, 2013.

Carroll, Ken. “Linear and Non-linear Learning.” Ken Carroll’s Remarkable Writing Techniques. 2007. Web. 30 Jan. 2013.

Changizi, Mark. “The Problem With the Web and E-Books Is That There’s No Space for Them.” Psychology Today. 7 Feb. 2011. Web. 4 Mar. 2013.

Chen, Guanling, and David Kotz. A Survey of Context-aware Mobile Computing Research. Technical Report TR2000-381, Dept. of Computer Science, Dartmouth College, 2000. Web. 4 Mar. 2013.

Chun, Marvin, and Yuhong Jiang. “Contextual Cueing: Implicit Learning and Memory of Visual Context Guides Spatial Attention.” Cognitive Psychology 36.1 (1998): 28–71. Web. 4 Mar. 2013.

Conjecture Corporation. “What Is Technological Convergence?” wiseGeek. Web. 3 Mar. 2013.

Cooper, Alan. About Face 3: The Essentials of Interaction Design. 3rd ed. Indianapolis: Wiley, John & Sons, Incorporated, 2012. eBook.

Darken, Rudolph P., and Barry Peterson. “Spatial Orientation, Wayfinding, and Representation.” Handbook of virtual environments (2002): 493–518. Print.

DeFazzio, Gene. “Navigation of Website Content.” Rocketface Graphics. 2013. Web. 5 Mar. 2013.

Dervin, Brenda. “Sense-Making Studies.” Ohio State University. 28 Mar. 2012. Web. 3 Mar. 2013.

Freksa, Christian, Alexander Klippel, and Stephan Winter. “A Cognitive Perspective on Spatial Context.” Spatial cognition: Specialization and integration 05491 (2007): n. pag. Web. 4 Mar. 2013.

Holmqvist, Kenneth et al. “Reading or Scanning? A Study of Newspaper and Net Paper Reading.” Mind 2.3 (2003): 4. Print.

Intel. “Moore’s Law.” 22 Mar. 2005. Web. 3 Mar. 2013.

Levins, William. “Stop the Above The Fold Web Design Insanity - People Scroll Today!!!!!” Nuvonium. 26 June 2011. Web. 4 Mar. 2013.

Martin, F., and J. Klein. “Effects of Objectives, Practice, and Review in Multimedia Instruction.” Journal of Educational Multimedia and Hypermedia 17.2 (2008): 171–189. Print.

McLuhan, Marshall. Laws of Media. Toronto: University of Toronto Press, 1988. Print.

Miall, David, and Teresa Dobson. “Reading Hypertext and the Experience of Literature.” Journal of Digital Information 2.1 (2001): n. pag. Web. 3 Mar. 2013.

“Navigation and Wayfinding.” Web Style Guide 3rd Edition. Web. 3 Mar. 2013.

NICHCY. “How People Read on the Web.” National Dissemination Center for Children with Disabilities. Aug. 2011. Web. 3 Mar. 2013.

Nielsen, Jakob. “F-Shaped Pattern For Reading Web Content.” Nielsen Norman Group. 17 Apr. 2006. Web. 3 Mar. 2013.

Nielsen, Jakob. “How Users Read on the Web.” Nielsen Norman Group. 1 Oct. 1997. Web. 3 Mar. 2013.

Nielsen, Lene. “Personas.” The Encyclopedia of Human-Computer Interaction. 2nd ed. Eds. Mads Soegaard and Rikke Friis Dam. Aarhus, Denmark: The Interaction Design Foundation, 2013.

Price, S., and Y. Rogers. “Let’s Get Physical: The Learning Benefits of Interacting in Digitally Augmented Physical Spaces.” Computers & Education 43.1 (2004): 137–151. Print.

Ray, PJ. “The Myth of Cyberspace.” The New Inquiry. 13 Apr. 2012. Web. 3 Mar. 2013.

Rohler, Nathan. “Quick Guide: Adding Smooth Scrolling to Your Webpages.” DWUser.com Education Center. Web. 4 Mar. 2013.

Rouet, M., J. F. Rouet, I. Epstein, and P. Fayard. “Effects of Online Reading on Popular Science Comprehension.” Science Communication 25.2 (2003): 99–128. Print.

Tripp, Steven. “Cognitive Navigation: Toward a Biological Basis for Instructional Design.” Educational Technology & Society. 2001. Web. 3 Mar. 2013.

Webdesigner Depot. “Guiding Users with Linear Presentation.” Webdesigner Depot. 6 Aug. 2012. Web. 4 Mar. 2013.
