This blog post contains my design diary from the Tangible and Embodied Interaction class. Every week had a different topic within the field, for which my group had to come up with concepts and prototypes, both wireframes and video prototypes. The videos are linked in each week's entry. The reading sections contain my reflections and insights on the topics as well as my group's progress and concepts.
Monday, 6th November
On Monday, a new class named Tangible and Embodied Interaction began. The class consists of two modules: the first covers a new topic each week, while the second is a bigger project. The first module started on Monday with a lecture about this week's topic, “glanceability”.
Glanceability refers to visual information on screen displays that is clear enough to be understood with a glance of a few seconds. Beyond the visual, we humans can understand information through our other senses, such as touch and hearing. A vibration from a cellphone tells us that we have a notification or message, while a continuous vibration implies an incoming call. This builds on human cognition.
The task for this week is to design paper prototypes exploring the glanceable behavior of a multi-screen UI. The class was divided into groups of three, and each group has to make a video prototype which highlights the interactions of the paper prototype.
Tuesday, 7th November
My group met early on Tuesday to decide which topic and digital screens to work with. We decided to work with the topic “grocery shopping” and use a smartwatch screen and smart glasses. We want users to find food which helps them fulfill their dietary reference intake (DRI) regarding calories, calcium, etc. Our other ideas were about speeding up traffic and saving money.
The class also had a seminar with some readings about glanceability as well as peripheral displays. Two readings were mandatory and one was optional. In groups, we got a bunch of questions to discuss for each paper.
The first reading was called “Designing and Evaluating Glanceable Peripheral Displays” by Tara Matthews.
The reading was about guidelines for designing glanceable peripheral displays. I learned a lot about both glanceability and peripheral displays from this paper. A peripheral display is a display where the user can have several activities or objects in the background, on the edges of the display. These objects are not in the user's main focus, but the user is aware that they exist. The text is a qualitative analysis because it provides insights into designing for glanceability, and it gave me a deeper understanding of the topic. The paper is certainly relevant for this week's project as well as for future design projects that require fast and understandable information. In the group, we discussed how any display can be a peripheral display depending on where your focus is: the edges will be blurry, but we still get information from them at the same time.
Glanceability is a topic worth thinking about for interaction designers. It's important that the screen displays a reasonable amount of information that is both quick and easy to read. If a screen displays the right amount of information, it will reduce confusion for the user and save time.
The second reading was called “Exploring the Design Space of Glanceable Feedback for Physical Activity Trackers” by Rúben Gouveia, Fábio Pereira, et al.
The text was about exploring glanceable behavioral feedback for physical activity through a watch. The paper was really interesting and taught me how to run a design process around glanceability. The paper was a quantitative analysis because it aims to understand glanceable behavior and observes several participants' physical activities during a month. The authors also gave open-ended questions to the participants about their experience with the prototypes. The text is valuable for us interaction designers because it teaches us how to do our own research on behavioral glanceability. We discussed feedback a lot in the group: feedback is always output which responds to either explicit or implicit input. We also discussed behavioral feedback, where the user's behavior changes based on the information on the screen. For example, a user might decide to take another walk based on the displayed number of steps remaining to complete a goal.
The third reading was called “Evaluating Peripheral Displays” by Tara Matthews, Gary Hsieh, and Jennifer Mankoff which was a continuation of the first paper.
Wednesday, 8th November
We continued to work on the wireframes on Wednesday. We decided to make the wireframes in Illustrator and printed them out for the video prototype. We put further focus on the connection between the watch and the glasses. The user sets their preferences through an app, and the glasses scan the barcodes of the items. After the user has scanned two products, the smartwatch gives feedback that tells the user which of the two items will help them the most.
These are the wireframes. The wireframe in the left corner (first row) is the start menu, where the user can enter the settings, edit a category, or set a new category for DRI. In the following wireframes, the user picks a category and subcategory. The fourth wireframe tells the user to put on the glasses. Lastly, the final wireframe shows the feedback after the scanning. All the wireframes can be understood with a quick look.
Thursday, 9th November
On Thursday, we made the smartwatch and video prototypes. We used the wireframes from Wednesday to demonstrate the interactions in the video. We realized that displaying feedback through the smartwatch was not a good idea: the interaction is complicated for the user, and we felt we did not use the smart glasses enough. Having smart glasses just for scanning made them pointless, since they could have been replaced with any camera; the smartglass display needs to be beneficial for the user. We decided to show the information in the smart glasses instead, so the user doesn't need to look down at the watch. The smart glasses now scan entire shelves in the grocery store by barcode instead of just two products. This change saves the user the effort of grabbing two items and scanning them, and it also makes the glasses more necessary. For example, the user could have compared the protein content of two items without the technology, but searching through a large number of items would take significantly longer. Therefore, the glasses become valuable in the situation. The glasses keep the user updated during the scanning until the result appears, and they provide the user glanceable information about the items' locations. The glanceability is behavioral; in other words, the glasses make the user pick the specific item.
The picture on the right informs the user that the glasses are scanning, while the one on the left shows the user visual highlights in the surrounding area where the relevant items are. The glasses will scan when the user is within a certain distance of the shelves.
The video prototype can be seen here:
Friday, 10th November
We got some feedback after the presentation on Friday. The concept was received as clear and easy to understand. However, the teacher questioned the glanceability of the smartglass display because the information would pop up in front of the user; he compared it with putting up signs in the store. I can agree with the teacher on this point: it's glanceable only to a certain extent, due to the easy and fast information. In the video prototype, we forgot to add a detail which we discussed during Friday morning: the highlighted area around the items will always remain after the scanning and never disappear. The information on the display is augmented reality; it exists on the screen within a real-world environment. Imagine a user going to the store several times using this product. After some time, there would be a lot of highlighted items in the store. The user may forget where some of the items were, but the scanned items' information is always there. With the glasses, the items' locations would appear in the user's peripheral vision, so the user would once again know where the items are located. If the information popped up and then disappeared completely, it wouldn't be glanceable because the information is no longer there. For instance, the user can glance at the watch's screen for the remaining steps because it's easy to access and it's always there.
I have gained several insights about glanceability and peripheral displays. Glanceable information is fast, quick information which can be understood in seconds; otherwise, the user's interaction would be labeled as reading or looking. To design for glanceability, the information should always be there and easy to access, for instance, an electronic board displaying arrivals or departures, which is very easy to access due to its huge size. Peripheral means there are several things at the edges of one's vision. A peripheral display has objects in the background which the user can choose to focus on, for instance, a program downloading or anti-virus software conducting a scan. The programs are not in the user's main focus, thus they are in the background. I have also learned about behavioral glanceability, where the glanceable information changes the behavior of the user. The small project this week gave me the insight that the information should always exist if it is to count as glanceable.
Overall, glanceability and peripheral displays are important design aspects for interaction designers to be aware of. Glanceability can give users better comprehension as well as change their behavior, while a peripheral display can host several activities at once which the user can choose to focus on.
Monday, 13th November
We had a lecture which introduced us to the new topic. The topic for the week is “quantified self”, which means lifelogging or self-logging. For instance, users log their steps for various reasons: to know how much they have walked, to reach a goal, or to compete with someone else. Lifelogging doesn't have to be quantitative; it can be qualitative as well, to get a better comprehension of a problem. Digital products and software have made tracking simpler than ever, helping users complete their endeavors and improve upon themselves. The initiative for the logging can be either explicit or implicit, but the purpose is to aid the user (improvement is common, as is optimization).
Collecting data through quantified self is important for interaction designers and their research. Interaction designers need to track users' activities to get an overview of how users respond to their products or prototypes, in order to find a solution or a problem. An observation usually takes place over a number of days.
Any kind of self-tracking, regardless of the approach, is an objectification. We observe different aspects of our lives and objectify them, for instance, our steps or pulse. By objectifying these aspects, we create potential for optimisation, self-improvement, and self-experimentation, which is narcissistic: we want to improve ourselves individually.
In order to choose a new topic for the week, we asked ourselves what we would log or track the most. We came up with examples such as hygiene, how often we talk to family, and pictures/selfies on social media.
We started with the social media topic. In this example, we would log how often users take pictures merely of themselves as well as pictures with other people, focusing on Instagram only. We wanted to explore the question: how often do we present ourselves (opinions/visual representation)? The idea was to implicitly track the user's uploads so their actions could be presented and reflected upon. The tracking activity would be shown visually to all users with some sort of indication. We realized that this was not a good idea because it didn't give any room for self-reflection and improvement. Users who take a lot of pictures of themselves won't change their behavior because it's their choice, and they would not appreciate being labeled as narcissistic. The tracking is not explicit; it's more like surveillance of the pictures. We decided not to continue with this concept because we could not argue why it would be accepted in practice or relevant. We had a difficult time with ideation regarding what we wanted to do, and we spent too many hours brainstorming and discussing while trying to create a decent concept.
Tuesday, 14th November
On Tuesday, we decided to go back to our other examples and began to work further on the contact concept. Time is an essential resource in a one-week project, so we took a previous example rather than creating a new one. We expanded the example to include all people the user desires to stay in touch with. Even though we knew which concept to explore, a problem persisted: should we track the last time the conversation was active, or the user's lack of initiative? We decided to track the lack of initiative because it's a big factor in ruined relationships, and it's an area the user can improve upon. The concept is an app which implicitly collects conversation data from several social media and messaging services. We want to help users improve contact with acquaintances by taking the initiative in conversations more often. The app will work across several social media, which is further explained with the wireframes. It will display the initiatives on both sides, revealing which person took the most initiative. To summarize, the intention of the concept is to make the user self-reflect on their lack of initiative. If the user rarely took the initiative to start a conversation compared to the other person, the application will give the user a feeling of “guilt” by highlighting the comparison.
These are wireframes of the application. The left wireframe shows the startup screen with the logo and name. The following wireframe shows the user's contact list, where the logo indicates that a conversation is being tracked. Displaying the logo is needed to remind the user, within social media, which conversations are tracked. Interacting with the plus sign allows users to add new contacts. The last wireframe is the selection of media, for example, Facebook and Skype. We made this design choice because users might not want to track all their social media. For instance, Snapchat is heavily based on sending pictures, and sending a picture might have a lower communication value for one's mother compared to a call. The circles are meant to contain the logos of the various media.
The first wireframe is the artifact's notification to the user. It illustrates a notification on the lock screen reminding the user to initiate contact so the relationship won't be one-sided. The notification gives the user detailed information about the total initiatives on both sides: the user has initiated the conversation 3 times in the last month, while “Mom” has initiated 20 times. If the user interacts with the notification, the application will start and display the same text, plus a visual addition showing where the user's latest initiative occurred. After the user has tapped the “Take initiative!” button, the wireframe to the right appears, which lets the user choose the medium for the initiative. There might be situations suited to specific media, so we let the user pick between all the tracked media.
The application's tracking through social media is visible via the logo beside a contact's name. This indication helps the user understand which person is being tracked. The logo behaves as a button that displays information when pressed: glanceable information about how many times the user and the contact have initiated the conversation during the last 30 days. The latest medium where an initiative occurred will also be visible, helping the user remember the latest initiative better. The design will hopefully give the user a feeling of guilt, convincing them to take the initiative.
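The initiative-counting logic behind the concept could be sketched roughly as follows. This is only a minimal sketch with hypothetical names; it assumes a time-sorted list of (sender, timestamp) messages and that a message sent after a long gap counts as a new initiative (the 24-hour threshold is my assumption, not part of the concept):

```python
from datetime import datetime, timedelta

# Assumption: a message sent after more than 24 hours of silence
# counts as "initiating" a new conversation.
GAP = timedelta(hours=24)

def count_initiatives(messages):
    """Count initiatives per sender from (sender, timestamp) tuples
    sorted by time. The first message always counts as an initiative."""
    counts = {}
    prev_time = None
    for sender, ts in messages:
        if prev_time is None or ts - prev_time > GAP:
            counts[sender] = counts.get(sender, 0) + 1
        prev_time = ts
    return counts

msgs = [
    ("mom", datetime(2017, 11, 1, 9)),
    ("me",  datetime(2017, 11, 1, 10)),  # quick reply, not an initiative
    ("mom", datetime(2017, 11, 5, 18)),  # new conversation after a gap
    ("me",  datetime(2017, 11, 9, 12)),  # my first initiative
]
print(count_initiatives(msgs))  # → {'mom': 2, 'me': 1}
```

The imbalance between the two counts is what the notification would surface to the user.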
Wednesday, 15th November
On Wednesday, there was a seminar based on the book “The Quantified Self” by Deborah Lupton. We discussed the chapters of the book regarding self-tracking and the self. We had interesting discussions about the topics with the teacher's involvement. The teacher mentioned three words that I thought were important: power, responsibility, and subject.
Power can be defined as ordering or telling another person what to do; by doing so, one has shown power over the other person. In self-tracking, power is associated with the user. For instance, an application logging the number of calories each day tells the user what to eat and how much, which changes the user's behavior regarding daily meal intake. This is the case with my group's concept as well: we tell the user to take the initiative in a conversation through glanceable behavioral feedback.
Responsibility is a response to a situation, while being irresponsible is the opposite. Using a device for self-tracking purposes, such as self-improvement and optimisation, is a response by the user. Users have the responsibility to strive for and achieve the goal of interest. A device is an aiding tool which makes the user more organized, so there is a better chance of achieving the goal quicker. On the other hand, the device helps the user control their progress by visually keeping track of the data, which can make the user more motivated to achieve a goal or take some other action.
The subject of our concept is to help the user enhance conversations in all kinds of relationships, and to highlight that a relationship shouldn't be one-sided: both parties should take initiative. It's an important subject to discuss.
Thursday, 16th November
During Thursday, we made the video prototype for the presentation on Friday. We did not make any changes to the concept.
The video prototype can be found here:
Friday, 17th November
During the presentations, we got the opportunity to see other groups' presentations and concepts. The other groups had some interesting concepts as well, such as tracking bullying and tracking laughter, and it was informative to watch their videos. One group, with a concept for managing work time, mostly used their video as their presentation; it explained their ideas very well. We spent most of our presentation time talking about the concept and then showed our video, which explained the concept again. It felt like we were over-explaining it to an extent.
The presentation overall went really well and we got some good feedback. There is a risk of passive aggression if both sides get a notification: the person who takes the initiative more than the other might be irritated by always being the initiator, realize this, and refuse to initiate further contact. Is this a reaction to the displayed numbers? It might be a problem if the person sees a huge difference in the numbers. A simple text saying that the person has taken the initiative more than the other might reduce the aggression, since the person would no longer see the exact difference. There is also an important aspect regarding the value of different kinds of communication. A text message might not be as valuable as a call, and sending a picture through Snapchat is not as good for a conversation as a text message. How could we design a system that values different kinds of communication? We could implement an options menu where the people in the conversation select which way of communicating they prefer. This information could be displayed in the notification as well, so the preferred approach to communication would be valued more.
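The weighting idea from the feedback could be sketched like this. It's only an illustration under my own assumptions: the channel names and weight values are hypothetical, standing in for whatever the contact selects in the options menu:

```python
# Hypothetical per-channel weights reflecting how much a contact values
# each way of communicating; a call counts more than a Snapchat picture.
CHANNEL_WEIGHTS = {"call": 3.0, "text": 1.0, "snapchat": 0.5}

def weighted_initiatives(events):
    """Sum weighted initiative scores from (person, channel) events.
    Unknown channels fall back to a neutral weight of 1.0."""
    totals = {}
    for person, channel in events:
        totals[person] = totals.get(person, 0.0) + CHANNEL_WEIGHTS.get(channel, 1.0)
    return totals

events = [("me", "snapchat"), ("me", "snapchat"), ("mom", "call")]
print(weighted_initiatives(events))  # → {'me': 1.0, 'mom': 3.0}
```

With weights like these, two Snapchat pictures from me count for less than one call from “Mom”, even though I technically initiated more often.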
Self-tracking is an important tool for interaction designers, not just for collecting data but also for being able to change a user's behavior. The quantified self is about improving oneself to become a better person or optimizing a situation. Tracking can also happen without any digital means, such as a farmer counting the animals of a herd with a pencil. However, digital technology helps us control and organize the logging to reach our goals. Working with self-tracking has been really interesting and enjoyable. It's a topic that raises a lot of questions and even provokes a lot of thoughts, for instance: how much of our lives would we allow to be tracked?
The self as a topic is tremendously broad, and it is important to be familiar with it in order to know the reasons behind people's interactions with products. Self-tracking is definitely something I will continue to discuss and explore in my future projects.
Monday, 20th November
The topic of the third week is ubiquitous computing (ubicomp). Ubicomp is a concept in computer science where computing appears anytime and everywhere. Ubicomp is also known as pervasive computing, ambient intelligence, and “everyware.”
The technologies supporting ubiquitous computing include the Internet, artificial intelligence, advanced middleware, operating systems, mobile code, sensors, microprocessors, new I/O and user interfaces, networks, mobile protocols, location and positioning, and new materials.
The brief for the week is to come up with new affordances for a multi-device voice/sound-based interaction system while focusing on one of the following characteristics: learnability, exploration, breakdowns, errors & deviations, and social & cultural experiences.
We had a lecture where the teacher explained all of the characteristics as well as ubicomp itself. Afterwards, my group started to brainstorm ideas, which led to the creation of a decent concept focusing on learnability as well as exploration and errors. The concept is a device which works as expected while being actively used, but when the device has not been used frequently for a long time, it becomes lazy and stupid. It needs attention sometimes, like a flower. Imagine a voice assistant such as Siri or Google Assistant: its behavior would change with inactivity, becoming moody and telling the user to search for an answer themselves. We discussed whether to apply the concept to music or to an assistant. Music would degrade in quality and give the user a horrible listening experience, but music felt too vague for the concept; we liked the assistant idea better because of the assistant's voice and the larger range of interactions it provides. We expanded the idea with the thought of making the assistant smarter through interaction, offering further features. For example, in the first stage the assistant will play the desired song for the user; after being active for some time, it can suggest songs the user might like. It learns about the user as long as the user interacts with it, which encourages the user to interact for two reasons. The first is the assistant's ability to continuously gain new features that benefit the user when it is highly active. The second is that activity is essential to keep the assistant from becoming lazy and moody. We want to explore the question: what if a smart A.I. shows human attributes? We already treat Google Search as a best friend; what if we take that further?
It becomes like a muscle for the A.I., just like learning a new language, training a new skill, or doing physical activities: if you don't pursue these activities frequently, your skills will deteriorate over time. This concept doesn't solve any problem for users; rather, it addresses a question for designers: “How do we keep the user engaged with the product frequently?” The design of the idea is critical design, which raises questions rather than answering them.
The new affordances in this concept are that the assistant finds new and better ways to ask and understand the user. The user won't significantly notice these changes because they occur in the background. The other new affordance is that the assistant always needs to be active to keep its quality high; otherwise, it will get worse. This affordance will become natural for the user, as one learns to treat the assistant more like a human being.
Tuesday, 21st November
On Tuesday, we had a lot of problems with the concepts. One group member was critical of the current concept, and during the whole day our group either tried to improve the concept further or create a completely new one. There would be an interpretation problem with the lower-quality sound: the user could misunderstand the situation, which could eventually lead to them returning the product. One improvement would be for the assistant to tell the user that it has been inactive for a long time and that the quality has therefore become worse. Another approach is the ecological one, where the user can easily understand the interaction from the design of the product itself. One implementation of this would be a visible indicator on the assistant, where the left side illustrates inactivity and the right side activity (a sad face and a happy face could work as well). The needle would indicate the assistant's current “mood”, as shown in the picture below. The user would comprehend the “mood” of the assistant without interacting with it, and would thus understand the situations that occur, for example the degraded music, more clearly by connecting the worse quality with the “mood”.
The constant discussion of which concept to pick or improve upon took many valuable hours. We got completely stuck, which had a more significant adverse impact on our project than last week. In the end, we went back to the beginning and created a new concept: one where the user interacts with various systems by gestures, for instance, clapping to turn on a lamp. However, the system doesn't respond to the gestures themselves but to the sounds of the gestures.
I think it would have been better to improve the concept we already had instead of starting over. Now we have to discuss the new concept further as well as find new affordances. This will be very stressful, and the stress could be a crucial factor in reducing the quality of our work.
Wednesday, 22nd November
On Wednesday, we had a seminar regarding the following three texts:
1. The computer for the 21st century. Mobile Computing and Communications by Weiser, M.
2. Seamful interweaving: heterogeneity in the theory and design of interactive systems. by Chalmers, M., & Galani, A.
3. Technology affordances. by Gaver, W. W.
In our groups, we discussed and selected keywords of the texts which we further discussed with the whole class at the seminar.
Our keyword for the first text by Weiser was “adapt”. Weiser tries to envision what computing in the 21st century will be like. He describes ubiquitous computing and systems working in the background, and he writes at the end of his paper that machines are made to fit into the human environment. Hence, we chose the word “adapt”: machines adapt to our everyday lives regarding talking, working, safety, and much more. When technology fades into the background, it becomes ready-to-hand and can be used in service of our main focus. Once adapted, the technology can be used anytime and anywhere, either to act or to display useful information for the user.
Our keywords for the second text by Chalmers and Galani are unified media and peers. Treating media as a single entity connected to a larger web of digital media leads to a better understanding of the processes of interweaving, accommodation, and appropriation for that media. Designers should treat technology as non-isolated media to create a unified, seamless experience across users. This can produce a more informative design by connecting the experiences and viewing the different digital media as peers, rather than treating any one space as the primary tool of focus.
Our keywords for the third text by Gaver are perspective, experience, and presentation. The interpretation of an object's presentation is affected by a person's perspective and by the experience the user has with the product.
Medical equipment is self-explanatory for a specialist in the area. A door handle is self-explanatory as well, but for all kinds of users: they understand at a glance that they need to grasp the handle to open the door. Products following a cognitive approach need experience/presentation in order to be comprehended, while products following the ecological approach can be comprehended from the design alone. The ecological approach is more valuable than the cognitive one for everyday people because it's easy to understand.
Designers should think about cultural differences around the world when designing with cognitive and ecological approaches, since diverse cultural perspectives might interpret the design differently. They also need to start with the human conditions first and with what affordances users need.
The cognitive approach is also about mastering a skill. The product might be a struggle for users at first, but after spending some time learning it, they will eventually have a better understanding and a smoother experience. The product becomes ready-to-hand: it is no longer about the product itself, but rather about how and why users are interacting through it. A computer mouse, for instance.
In the seminar, we discussed how all affordances were new at some point, depending on one's experience. An old affordance for one user could be a new affordance for another. What matters with affordances is familiarity: when designers defamiliarize an affordance, it becomes a new one which we haven't seen before.
Not all affordances are perceptible; affordances can also be hidden or false. A false affordance is, for instance, a button that doesn't work and thus has no function. A hidden affordance is one that is not obvious to the user: there are possibilities for action, but they aren't perceived. For example, a stone being used to hit a nail instead of a hammer.
After the seminar, my group worked more on the concept as well as on creating a video prototype. The characteristic we explored was exploration: the user explores the inputs with preset sounds. Users interact with the device through their voice by naming the system of interest, for example the TV. Once the product knows which system is selected, the user makes a sound with a gesture. The system reads these sounds as input and provides feedback such as turning the system on or off. The new affordance in the concept is how we defamiliarize the way we interact with our systems at home: we are familiar with interacting with the systems physically, using our hands, which is now changed to the sounds of our gestures. The design follows a cognitive approach; people won't understand the device from the design alone, only with experience. Any affordance that doesn't succeed in helping the user understand the product is bad design.
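The two-step interaction described above (name the system by voice, then make a gesture sound) could be sketched as a simple mapping from recognized sounds to commands. This is only a sketch; the sound labels, device names, and commands are hypothetical, and a real implementation would need actual audio classification, which is assumed away here:

```python
# Hypothetical mapping from (selected system, recognized gesture sound)
# to a command. The sound labels are assumed to come from an audio
# recognizer that is out of scope for this sketch.
SOUND_COMMANDS = {
    ("lamp", "clap"): "toggle_power",
    ("tv", "snap"): "toggle_power",
    ("tv", "double_clap"): "next_channel",
}

def handle_input(selected_system, sound_label):
    """Resolve a (system, sound) pair to a command, or None if unmapped."""
    return SOUND_COMMANDS.get((selected_system, sound_label))

# The user first names the system by voice ("TV"), then makes the gesture.
print(handle_input("tv", "snap"))       # → toggle_power
print(handle_input("lamp", "whistle"))  # → None (unrecognized sound)
```

The unmapped case is where the breakdown feedback from the seminar discussion would matter: the system should signal that the sound was heard but not understood, rather than staying silent.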
Here is the link to the video:
Friday, 24th November
On Friday, it was finally time to present the concept. Just like the previous week, the class had some interesting concepts. For example, one group had a device which turns on the lamps based on the force applied to the front door. If a person shuts the door with a decent amount of force, it makes a bang sound which turns on the lamps in the living room; with no force, the door makes no sound and the lamps stay off. This is a good explicit design with good affordances: we are familiar with opening and closing a door gently or with force depending on the situation, so the idea won't cause any confusion for users.
We got some good feedback on our concept. What would happen if the user claps while watching a football match? There should be options to prevent implicit interactions from occurring in the home. Another piece of feedback was that there is an “owner” of the interactions at home: visitors would always need to interpret the interactions, and different visitors might interpret them in different ways. There should be universal gestures for the system, so people don't have to interpret.
For next week, our group really needs to improve its self-criticism. Valuable time will be lost if we get stuck in time-consuming discussions about various concepts. We would have articulated our concept further if we had put more time into discussing the concept we chose.
Affordances are essential for interaction designers. By creating with affordances in mind, products become more understandable for users at a glance. In other words, affordances help users interpret the design correctly and therefore use it correctly; otherwise, it's not a good design. This knowledge is useful for designing for everyday people, which is what interaction designers do. Technology is becoming more ubiquitous in the sense of smart homes, and even when we aren't home, computers are always accessible to us, anytime and anywhere, in the form of smartphones, smartwatches, and laptops. Computers thereby become tools in our everyday lives, working in the background. First we learn the tools, then they become extensions of ourselves, aiding us with creation, ideation, and ease throughout our days.