Diary of the Tangible and Embodied Interaction Class

This blog post contains my design diary from the Tangible and Embodied Interaction class. Every week had a different topic within the field, and my group had to come up with concepts and prototypes, both wireframes and video prototypes. The videos are linked throughout. The reading entries are my reflections and insights on the topics, as well as notes on my group's progress and concepts.

Module 1:1

Week 1

Monday, 6th November

On Monday, a new class named Tangible and Embodied Interaction began. The class consists of two modules. The first module has a new topic each week, while the second one has a bigger project. The first module started on Monday with a lecture about “glanceability”, which is also this week’s topic.
Glanceability refers to visual information on screen displays that can be understood with a glance of a few seconds. Beyond the visual, we humans can also take in information through our other senses, such as touch and hearing. A vibration from a cellphone tells us that we have a notification or a message, while a continuous vibration signals an incoming call. This builds on human cognition.
The task for this week is to design paper prototypes exploring the glanceable behavior of a multi-screen UI. The class was divided into groups of three, and we have to make a video prototype that highlights the interactions of the paper prototype.

Tuesday, 7th November

Seminar

My group met early on Tuesday to decide which topic and digital screens to work with. We decided to work with the topic “grocery shopping” and use a smartwatch screen and smart glasses. We want users to find food that helps them fulfill their dietary reference intake (DRI) regarding calories, calcium, etc. Our other ideas were about speeding up traffic and saving money.
The class also had a seminar with some readings about glanceability as well as peripheral displays. Two readings were mandatory and one was optional. In groups, we got a bunch of questions to discuss for each paper.

Text 1

The first reading was called “Designing and Evaluating Glanceable Peripheral Displays” by Tara Matthews.

The reading was about guidelines for designing glanceable peripheral displays. I learned a lot about glanceability as well as peripheral displays from this paper. A peripheral display is a display where the user can have several activities or objects in the background, at the edges of the display. These objects are not in the user’s main focus, but the user is aware that they exist. The text is a qualitative analysis because it provides insights into designing for glanceability and gave me a deeper understanding of the topic. The paper is certainly relevant for this week’s project as well as for future design projects that require fast and understandable information. In the group, we discussed that any display can be a peripheral display depending on where your focus is. The edges will be blurry, but we still get information from them at the same time.
Glanceability is a topic worth thinking about for interaction designers. It’s important that a screen displays a reasonable amount of information that is both quick and easy to read. If a screen displays well-chosen information, it will reduce confusion for the user and save some time.

Text 2

The second reading was called “Exploring the Design Space of Glanceable Feedback for Physical Activity Trackers” by Rúben Gouveia, Fábio Pereira, et al.
The text was about exploring glanceable behavioral feedback on physical activity through a watch. The paper was really interesting and taught me how to run a design process around glanceability. The paper was a quantitative analysis because it aims to understand glanceable behavior and observes several participants’ physical activities over a month. The authors also gave the participants open-ended questions about their experience with the prototypes. The text is valuable for us interaction designers because it teaches us how to do our own research on behavioral glanceability. We discussed feedback a lot in the group: feedback is always output that responds to either explicit or implicit input. We also discussed behavioral feedback, where the user’s behavior changes based on the information on the screen. For example, a user might decide to take another walk based on the displayed number of steps remaining to complete a goal.
The third reading was called “Evaluating Peripheral Displays” by Tara Matthews, Gary Hsieh, and Jennifer Mankoff which was a continuation of the first paper.

Wednesday, 8th November

We continued to work on the wireframes on Wednesday. We decided to make the wireframes in Illustrator and print them out for the video prototype. We put further focus on the connection between the watch and the glasses. The user sets their preferences through an app, and the glasses scan the barcodes of the items. After the user has scanned the products, the user gets feedback through the smartwatch telling them which of the two items will help them the most.

wireframe1.JPG

These are the wireframes. The wireframe in the left corner (first row) is the start menu, where the user can enter the settings, edit a category, or set a new category for DRI. In the following wireframes, the user picks a category and a subcategory. The fourth wireframe tells the user to put on the glasses. Lastly, the final wireframe shows the feedback after the scanning. All the wireframes can be understood with a quick look.

Thursday, 9th November

On Thursday, we made the smartwatch and video prototypes. We used the wireframes from Wednesday to demonstrate the interactions in the video. We realized that displaying feedback through the smartwatch was not a good idea. The interaction is complicated for the user, and we felt we did not use the smartglasses enough. Having smartglasses just for scanning made the glasses pointless, since they could have been replaced with any camera. The smartglass display needs to be beneficial for the user. We decided to show the information in the smartglasses instead, so the user doesn’t need to look down at the watch. The smartglasses now scan the entire shelves in the grocery store by barcode instead of just two products. This change saves the user the effort of grabbing two items and scanning them, and it also makes the glasses more necessary. For example, the user could have compared the two items’ amounts of protein without the technology, but searching through an extended number of items would take significantly longer. Therefore, the glasses become valuable in the situation. The glasses keep the user updated during the scanning until the result appears and provide glanceable information about the items’ locations. The glanceability is behavioral; in other words, the glasses nudge the user to pick the specific item.

wireframe2.JPG

The picture on the right informs the user that the glasses are scanning, while the one on the left shows the user visual cues in the surrounding area where the relevant items are. The glasses scan when the user is within a certain distance of the shelves.
The video prototype can be seen here:

https://drive.google.com/file/d/1modvM2mUL-ZG84u63_2BiFLsC47GDIEx/view

Friday, 10th November

Presentation

We got some feedback after the presentation on Friday. The concept was received as clear and easy to understand. However, the teacher questioned the glanceability of the smartglass display because the information pops up in front of the user; he compared it to putting up signs in the store. I can agree with the teacher on this point. It’s glanceable to a certain extent due to the easy and fast information. In the video prototype, we forgot to add a detail which we had discussed during Friday morning: the highlighted areas of the items stay after the scanning and never disappear. The information on the display is augmented reality; it exists on the screen on top of the real-world environment. Imagine a user going to the store several times using this product. After some time, there would be a lot of highlighted items in the store. At times, the user may forget where some of the items were, but the scanned items’ information is always there. With the glasses, the items’ locations would appear in the user’s peripheral vision, so the user would once again know where the items are located. If the information popped up and then disappeared completely, it wouldn’t be glanceable, because the information would no longer be there. For instance, the user can glance at the watch screen for the remaining steps because it’s easy to access and it’s always there.

Insights

I have gained several insights about glanceability and peripheral displays. Glanceable information is fast and quick information that can be understood in seconds; otherwise, the user’s interaction would be labeled as reading or looking. To design for glanceability, the information should always be there and be easy to access. For instance, an electronic board displaying arrivals or departures is very easy to access due to its huge size. Peripheral means that there are several things at the edges of one’s vision. A peripheral display has objects in the background which the user can choose to focus on, for instance a program downloading or an anti-virus program running a scan. The programs are not in the user’s main focus, thus they are in the background. I have also learned about behavioral glanceability, where the glanceable information changes the behavior of the user. The small project this week gave me the insight that the information should always exist if it is to count as glanceable.
Overall, glanceability and peripheral displays are important design aspects for interaction designers to be aware of. Glanceability can give users better comprehension as well as change their behaviors, while a peripheral display can host several activities at once that the user can choose to focus on.

Module 1:2

Week 2

Monday, 13th November

We had a lecture which introduced us to the new topic. The topic for the week is “quantified self”, which means lifelogging or self-logging. For instance, users log their steps for various reasons, like learning how much they have walked, reaching a goal, or competing with someone else. Lifelogging doesn’t have to be quantitative; it can be qualitative as well, to get a better understanding of a problem. Digital products and software have made tracking simpler than ever, helping users complete their endeavors and improve themselves. The logging can be initiated either explicitly or implicitly, but the purpose is to aid the user (improvement is common, as is optimization).
Collecting data through the quantified self is important for interaction designers and their research. Interaction designers need to track users’ activities to get an overview of how they respond to the designers’ products or prototypes, in order to find a problem or a solution. Such an observation usually takes place over a number of days.
Any kind of self-tracking, regardless of the approach, is an objectification. We observe different aspects of our lives and objectify them, for instance our steps or our pulse. By objectifying these aspects, we create potential for optimisation, self-improvement, and self-experimentation, which has a narcissistic side: we want to improve ourselves individually.
In order to choose a new topic for the week, we asked ourselves what we would log or track the most. We came up with examples such as hygiene, how often we talk to family, and pictures/selfies on social media.
We started with the social media topic. In this example, we would log how often users take pictures merely of themselves as well as pictures with other people. We would focus on Instagram only. We wanted to explore the question – how often do we present ourselves (opinions/visual representation)? The idea was to implicitly track the user’s uploads so their actions could be presented and reflected upon. The tracking activity would be visually shown to all users with some sort of indication. We realized that this was not a good idea because it didn’t give any room for self-reflection and improvement. Users who take a lot of pictures of themselves won’t change their behavior because it’s their choice, and they wouldn’t appreciate being labeled as narcissistic. The tracking is not explicit; it is more like surveillance of the pictures. We decided not to continue with this concept because we could not argue why it would be accepted in practice or be relevant. We had a difficult time with ideation regarding what we wanted to do, and we spent too many hours brainstorming and discussing in order to create a decent concept.

Tuesday, 14th November

On Tuesday, we decided to go back to our other examples and work further with the contact concept. Time is an essential resource in a one-week project, so we preferred taking our previous example over creating a new one. We expanded the example to include all the people the user desires to stay in touch with. Even though we knew which concept to explore, the problem persisted: we wondered whether we should track the last time the conversation was active or the user’s lack of initiative. We decided to track the lack of initiative because it’s a big factor in ruined relationships, and it’s an area the user can improve on. The concept is based on an app which implicitly collects data about conversations across several social media and messaging services. We want to help users improve their contact with acquaintances by taking the initiative in conversations more often. The app works across several social media, which is explained further with the wireframes. The app displays the initiatives on both sides, revealing which person took the initiative most often. To summarize, the intention of the concept is to make the user self-reflect on their lack of initiative. If the user rarely takes the initiative to start a conversation compared to the other person, the application gives the user a feeling of “guilt” by highlighting the comparison.

wireframe3.JPG

These are wireframes of the application. The left wireframe shows the startup screen with the logo and name. The following wireframe shows the user’s contact list, where the logo indicates that a conversation is being tracked. Displaying the logo is needed to remind the user, inside the social media apps, which conversations are tracked. Interacting with the plus sign allows users to add new contacts. The last wireframe is the selection of media, for example Facebook and Skype. We made this design decision because users might not want to track all their social media. For instance, Snapchat is heavily based on sending pictures, and sending pictures might have a lower communication value for one’s mother compared to a call. The circles should contain the logos of the different media.

wireframe4.JPG

The first wireframe is the artifact’s notification to the user. It illustrates a notification on the lock screen reminding the user to initiate contact so that the relationship doesn’t become one-sided. The notification gives the user detailed information about the total initiatives on both sides: the user has initiated the conversation 3 times in the last month, while “Mom” has initiated it 20 times. If the user interacts with the notification, the application starts and displays the same text. In addition to the text, there is a visual element showing when the user’s latest initiative occurred. After the user has clicked the “Take initiative!” button, the wireframe to the right appears, letting the user choose which medium to use for the initiative. There might be situations that call for a specific medium, so we allow the user to pick between all the tracked media.

wireframe5.JPG

The application tracks conversations through social media, which is visible through the logo beside each contact’s name. This is an indication to help the user understand which person is being tracked. The logo behaves as a button that displays information when pressed. The information is a glanceable summary of how many times the user and the contact have initiated the conversation during the last 30 days. The latest medium where an initiative occurred is also visible, which helps the user remember the latest initiative better. The design will hopefully give the user a feeling of guilt, convincing them to take the initiative.

Wednesday, 15th November

Seminar

On Wednesday, there was a seminar based on the book “The Quantified Self” by Deborah Lupton. We discussed the chapters of the book regarding self-tracking and the self. We had interesting discussions about the topics with the teacher’s involvement. The teacher mentioned three words that I thought were important: power, responsibility, and subject.

Power can be defined as ordering or telling another person what to do; by doing so, one exercises power over the other person. In self-tracking, the word power is associated with the user. For instance, an application logging the number of calories each day tells the user what to eat and how much, which will change the user’s behavior regarding daily meal intake. This is the case with my group’s concept as well: we tell the user to take the initiative in a conversation through glanceable behavioral feedback.

Responsibility is a response to a situation, while being irresponsible is the opposite. Using a device for self-tracking purposes such as self-improvement and optimisation is a response by the user. The user has the responsibility to strive for and achieve the goal of interest. A device is an aiding tool that makes the user more organized; therefore, there is a better chance of achieving the goal sooner. On the other hand, the device helps the user control the progress by visually keeping track of the data. This can make the user more motivated to achieve a goal or to carry out another kind of interaction.
The subject of our concept is to help the user improve conversations in all kinds of relationships, and to highlight that a relationship shouldn’t be one-sided: both parties should take initiative. It’s an important subject to discuss.

Thursday, 16th November

During Thursday, we made the video prototype for the presentation on Friday. We did not make any changes to the concept.

The video prototype can be found here:
https://drive.google.com/open?id=1Q2zYtupvtfbRCGJ-Uo2R_g4TRdwhtL6-

Friday, 17th November

Presentations

During the presentations, we got the opportunity to see the other groups’ presentations and concepts. The other groups had some interesting concepts as well, such as tracking bullying and tracking laughter. It felt informative to watch their videos. One group, with a concept about managing work time, mostly used their video as their presentation, and it explained their ideas very well. We spent most of our presentation time talking about the concept and then also showed our video, which explained the concept again. It felt like we were over-explaining it to an extent.

The presentation overall went really well and we got some good feedback. There is a risk of passive aggression if both sides get a notification. The person who takes the initiative more often than the other might be irritated by always being the one to initiate, realize this, and refuse to initiate further contact. Is this a reaction to the displayed numbers? It might become a problem if the person sees a huge difference in the numbers. A simple text saying that the person has taken the initiative more often than the other, without showing the actual difference, might reduce the aggression. There is also an important aspect regarding the value of different forms of communication. A text message might not be as valuable as a call, and sending a picture through Snapchat is not as good for a conversation as a text message. How could we design a system that values different forms of communication? We could implement an options menu where the people in the conversation select which way of communicating they prefer. This information could be displayed in the notification as well. Thus, the preferred way of communicating would carry more weight.

Insights

Self-tracking is an important form of feedback for interaction designers, not just for collecting data but also for being able to change a user’s behavior. The quantified self is about improving oneself to become a better person or to optimize a situation. Tracking can also happen without any digital means, such as a farmer counting the animals of a herd with a pencil. However, digital technology helps us control and organize the logging to reach our goals. Working with self-tracking has been really interesting and enjoyable. It’s a topic that raises a lot of questions and even provokes a lot of thoughts, for instance: how much of our lives would we allow to be tracked?
The self as a topic is tremendously broad and important to be familiar with, in order to know the reasons behind people’s interactions with products. Self-tracking is definitely something I will continue to discuss and explore in my future projects.

Module 1:3

Week 3

Monday, 20th November

The topic of the third week is ubiquitous computing (ubicomp). Ubicomp is a concept in computer science where computing appears anytime and everywhere. Ubicomp is also known as pervasive computing, ambient intelligence and “everyware.”
The technologies supporting ubiquitous computing are the Internet, artificial intelligence, advanced middleware, operating systems, mobile code, sensors, microprocessors, new I/O and user interfaces, networks, mobile protocols, location and positioning, and new materials.

The brief for the week is to come up with new affordances for a multi-device voice/sound-based interaction system while focusing on one of the following characteristics: learnability, exploration, breakdowns, errors & deviations, and social & cultural experiences.

We had a lecture where the teacher explained all of the characteristics as well as ubicomp itself. Afterwards, my group started to brainstorm ideas, which led to the creation of a decent concept. The concept focuses on learnability as well as exploration and error. It is a device which works as expected while being actively used. However, when the device has not been used frequently for a long time, it becomes lazy and stupid. It needs attention sometimes, like a flower. Imagine a voice assistant, for instance Siri or Google Assistant, whose behavior changes with inactivity: it would become moody and tell the users to search for an answer themselves. We discussed whether to apply the concept to music or to an assistant. Music, in this case, would drop in quality and give the user a horrible listening experience. Music felt too vague for the concept; we liked the idea of an assistant better because of the assistant’s voice and the extended range of interactions an assistant provides. We expanded on the idea by making the assistant smarter through interaction, so that it offers further features. For example, in the first stage the assistant plays the desired song for the user. After being active for some time, the assistant can suggest songs the user would like to listen to. It learns information about the user as long as the user interacts with it. It urges the user to interact for two reasons. The first reason is the assistant’s ability to continuously gain new features that benefit the user when it is highly active. The second reason is that activity is essential to keep the assistant from becoming lazy and moody. We want to explore the question – what if a smart A.I. shows human attributes? We already treat Google Search like a best friend; what if we take that further?
It becomes like a muscle for the A.I., just like learning a new language, training a new skill or doing physical activities: if you don’t pursue these activities frequently, your skills will become worse after some time. This is a concept that doesn’t solve any problem for the users; rather, it raises a question for designers – “How do we keep the user frequently engaged with the product?” The design of the idea is critical design, which raises questions rather than answering them.

The new affordances with this concept are that the assistant finds improved and newer ways to ask and understand the user. The user won’t significantly notice these changes because they occur in the background. The other new affordance is that the assistant needs to be kept active to maintain high quality; otherwise it will degrade. The affordance will become natural for the user, as one learns to treat the assistant more like a human being.

Tuesday, 21st November

On Tuesday, we had a lot of problems with the concepts. One group member was critical of the current concept. During the whole day, our group either tried to improve the concept further or to create a completely new one. There would be an interpretation problem regarding the lower-quality sound: the user could misunderstand the situation, which could eventually lead to the user returning the product. An improvement would be if the assistant articulated to the user that it has been inactive for a long time and that this is why the quality has become worse. Another approach for improvement is the ecological approach, in which the user can understand the interaction simply from the design of the product. An implementation of this approach could be a visible gauge on the assistant, where the left side illustrates inactivity and the right side activity (a sad face and a happy face could work as well). The needle would indicate the assistant’s current “mood”, as shown in the picture below. The user would comprehend the “mood” of the assistant without interacting with it. Thus, the user would understand the situations that occur, for example the music scenario, more clearly by connecting the worse quality with the “mood”.

picture.JPG

The constant discussion about which concept we should pick or improve upon took many valuable hours. We got completely stuck, which had a significantly worse impact on our project than last week. In the end, we went back to the beginning and created a new concept, one where the user interacts with various systems through gestures, for instance clapping to turn on a lamp. However, the system doesn’t respond to the gestures themselves but to the sounds of the gestures.
I thought it would have been better to improve the concept we already had instead of starting over. Now we have to discuss the new concept further as well as find new affordances. This will be very stressful, and the stress can be a crucial factor in reducing the quality of our work.

Wednesday, 22nd November

Seminar

On Wednesday, we had a seminar regarding the following three texts:
1. “The Computer for the 21st Century” (Mobile Computing and Communications), by Weiser, M.

2. “Seamful Interweaving: Heterogeneity in the Theory and Design of Interactive Systems”, by Chalmers, M., & Galani, A.

3. “Technology Affordances”, by Gaver, W. W.

In our groups, we discussed and selected keywords from the texts, which we then discussed with the whole class at the seminar.

Our keyword for the first text by Weiser was “adapt”. Weiser tries to envision what computing in the 21st century will be like. He mentions ubiquitous computing and our systems working in the background. He writes at the end of his paper that machines are made to fit into the human environment; hence, we chose the word “adapt”. Machines adapt to our everyday lives when it comes to talking, working, safety and much more. When technology fades into the background, it becomes ready-to-hand and can serve our main focus. Once adapted, the technology can be used anytime and anywhere, either to be acted on or to display useful information for the user.
Our keywords for the second text, by Chalmers and Galani, were unified media and peers. Treating a medium as a single entity connected to a larger web of digital media leads to a better understanding of the processes of interweaving, accommodation, and appropriation for that medium. Designers should treat technology as non-isolated media in order to create a unified, seamless experience across users. This can lead to a more informative design by connecting the experiences and by viewing different digital media as peers, rather than treating any one space as the primary tool of focus.

Our keywords for the third text, by Gaver, were perspective, experience, and presentation. The interpretation of the presentation of an object is affected by a person’s perspective and their experience with the product. Medical equipment is self-explanatory for a specialist in the area. A door handle is self-explanatory as well, but for all kinds of users: they understand at a glance that they need to grasp the handle to open the door. Products with a cognitive approach require experience or presentation in order to be comprehended, while products with an ecological approach can be comprehended from the design alone. The ecological approach is more valuable than the cognitive one for everyday people because it’s easier to understand.

Designers should think about cultural differences around the world while designing with cognitive and ecological approaches, since different cultural perspectives might interpret the design differently. They also need to start from the human condition and from which affordances users need.

The cognitive approach is also about mastering a skill. The product might be a struggle for users at first, but after spending some time learning it, users will eventually have a better understanding of it and a smoother experience. The product becomes ready-to-hand; it is no longer about the product itself but rather about how and why users are interacting with it. A computer mouse is a good example.
In the seminar, we discussed how all affordances are new at some point, depending on experience. An old affordance for one user could be a new affordance for another user. What matters most with affordances is familiarity. When designers defamiliarize an affordance, it becomes a new one which we haven’t seen before.

Not all affordances are perceptible. Affordances can also be hidden or false. For instance, a button that doesn’t work has no function and is therefore a false affordance. A hidden affordance is an affordance that is not obvious to the users: there are possibilities for action, but they aren’t perceived by the user. For example, a stone being used to hit a nail instead of a hammer.

After the seminar, my group worked more on the concept as well as on creating a video prototype. The characteristic we explored was exploration: the users explore the inputs with preset sounds. Users can interact with the device through their voice, where they need to say which system they are interested in, for example the TV. When the product knows which system is addressed, the users make a sound with a gesture. The system reads these sounds as input and provides feedback, such as turning the system on or off. The new affordance in the concept is how we defamiliarize the way we interact with our systems at home. We are used to interacting with these systems physically, using our hands, and this is now changed to the sounds of our gestures. The design follows a cognitive approach: people won’t understand the device just from its design but through experience. Any affordance that doesn’t help the user understand the product is bad design.

Here is the link to the video:

https://drive.google.com/drive/u/0/folders/1YKIhk0DPJF_y6lWZfns90j9xBx1xN4QQ

Friday, 24th November

Presentations

On Friday, it was finally time to present the concept. Just like the previous week, the class had some interesting concepts. For example, one group had a device which turns on the lamps based on the force applied to the front door. If a person puts a decent amount of force into shutting the door, it makes a bang, which turns on the lamps in the living room. If there is no force, the door won’t make any sound and hence the lamps won’t turn on. This is a good explicit design with good affordances: we are used to opening and closing a door gently or with force based on the situation, so the idea won’t cause any confusion for the users.

We got some good feedback on our concept. What would happen if the user claps while watching a football match? There should be options to prevent implicit interactions from occurring in the home. Another piece of feedback was that there is an “owner” of the interactions at home; visitors would always need to interpret the interactions, and some visitors might interpret them in different ways. There should be universal gestures for the system, so people don’t have to interpret.

For next week, our group really needs to improve our self-criticism. Valuable time is lost when we get stuck in time-consuming discussions about various concepts. We would have articulated our concept further if we had put more time into discussing the chosen concept.

Insights

Affordances are essential for interaction designers. When designers create with affordances in mind, products become more understandable for users at a glance. In other words, affordances help users interpret the design correctly and therefore use it correctly; otherwise, it’s not a good design. This knowledge is useful for designing for everyday people, which is what interaction designers do. Technology is becoming more ubiquitous, for instance in smart homes. Even when we aren’t home, computers are always accessible to us, anytime and anywhere, in the form of smartphones, smartwatches and laptops. Therefore, computers become tools in our everyday lives, working in the background. First we learn the tools, then they become extensions of ourselves. These extensions can aid us with creation, ideation and ease throughout our days.

Week 10: Module 3 – Part 3

Monday, 30th October

On Monday, we had the show’n’tell session, and we met one hour earlier just to prepare. We decided to talk about all of our examples but only show three of them to the other groups. We had a PowerPoint with gifs of our examples ready to be shown, and we discussed what we were going to say. We also had a “Wizard of Oz” gif for the idea which we could not implement in the temperature example.

The show’n’tell went really well. Our peers had some interesting examples with the servo, including a radio and a curtain. We were a lot more prepared than in the previous module; we used the feedback and the reflections, which enhanced this module. Some of the groups had their examples ready to be used, but we felt that was not necessary for us. It was better to demonstrate with gifs due to the short time, and our peers understood our examples better than in the previous module because of them. The feedback on our examples was good: they liked the effort that we had put into the examples, and they liked our idea for the input feature that would select cities based on the temperature the user chooses.

Reflection

This time, we were more certain of what we were going to work with. We came to the decision to work with movement pretty fast. We were thinking more from the user’s perspective and asked ourselves questions, for instance: would we use this movement with the product ourselves? Our user testing with our peers also helped us gain new knowledge about the interactions which would have been difficult to figure out by ourselves. For example, the twist in the game example: the new arm of the box introduced a new movement in the game, the turning movement.

We were more certain about what we wanted to achieve with this module. We wanted to explore and gain insights into why we do not use these movements that often in our products today, and I think we succeeded in gaining those insights. The lever movement is not practical because of its size and its stationary state. It does not have the mobility of a remote control, even though the movement itself feels natural as an interaction (that is still not enough to make it practical). The turning movement worked well for the game, but the box must face a specific direction so the player won’t be confused while playing. Elias and I really liked this topic, which also made it easier for us to engage with the examples and ideas. Movement is how we interact with our products, and different movements for the same product might lead to better interactions.

Final reflection on the course

In the intro of the course, I wrote expectations for this class and myself. These were my expectations:

  • Be more aware of the different possibilities of interactions in artifacts and when to implement them.
  • To learn about the depths of interaction and use it in a practical manner.
  •  To give better critique towards the design for broader improvement opportunities.

I have definitely become more aware of interactions in products in this course. The interaction attributes taught me that interactions also evoke feelings in the user. For instance, the example I mentioned with the pin code: “A pin code used for shopping purposes evokes a feeling of safety for the user due to a code no one else knows.” Knowing the attributes of interactions makes it easier for me as an interaction designer to know when to implement them.

The implicit interaction and the space topics made me think about interactions in different ways. Learning about implicit and explicit interactions has changed the way I see the initiating party of an interaction as well as the visibility of the data. These are new in-depth thoughts on interaction for me, and I will certainly continue to use these insights in my career and future projects. I have also begun to see space from new perspectives regarding how the user moves through, locates and uses the space. As an interaction designer, it is important to think about space overall and make it as beneficial as possible for the user. The access to this space has to be smooth and certain, without any confusion. I definitely used the feedback we got from the show’n’tell sessions to reflect on and improve the way I work. The new insights allow me to critique not only my own work but also others’ work to a broader extent. I did not give that much criticism at the sessions, though; I will try to improve on that in the future.

Overall, I think the course was really good and it had an important impact on me as an interaction designer. I think about interactions in new ways compared to before. It will affect the way I design, so that the interactions have a broader meaning behind them as well as being more efficient.

Week 9: Module 3 – Part 2

Monday, 23rd October

On Monday, we started to think of new examples to create. We began to talk about how the system could use movement to display the weather for the user. It was an idea from last week that we now decided to explore further. We will continue to use the box and the arrow that we made last week. The arrow will point at the current temperature in degrees. We will add a 180-degree disc with temperature markings printed on it, so the temperature is displayed without any digital screen. A device showing the temperature is not unique, but we think it will be a fun experiment for the servo. Towards the end of the day, we added the disc to the box.

We also began to create a second example. We wondered how well the movement from the previous week would fit for entertainment purposes, in this case playful games. We wanted to combine the movement with a game so the user controls the character in the game by interacting with the arrow. In other words, the box and arrow will operate like a video game controller and replace the keyboard. Thanks to W3Schools, we found a good tutorial on basic game mechanics. The game is made with JavaScript, HTML, and CSS. We began to work on the game and implement the servo as a controller.

A link to the tutorial can be found here.

Tuesday, 24th October

On Tuesday, we continued to work on the examples from Monday. For the weather example, we created an HTML webpage with an input field. In this input field, one can write any city in the world and submit it. The page then provides feedback on the current weather at that location. The information about the cities and the forecasts comes from Yahoo Weather. There were some problems with the arrow indicator; it did not show the temperature accurately. For instance, if it was hot in a location, the arrow would show a low temperature.

Another factor in this problem was that we made some wrong calculations when making the disc, so the spacing between the degree marks is uneven.

ezgif.com-video-to-gif (1).gif

The video shows how the user can submit any city in the world and have the city’s temperature displayed. However, in the intended design the temperature should not be shown on the screen, only by the arrow.

We put more effort into making the arrow’s accuracy as good as possible. After working on the code, the arrow showed better accuracy for the temperature, but it is still not 100% accurate; it mostly gives a hint about the temperature.
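
Roughly, the kind of mapping we were aiming for looks like the sketch below: a linear conversion from temperature to servo angle. This is only a rough sketch of the idea, not our exact code; the disc range of -20°C to 40°C and the function name are my own assumptions for illustration.

```javascript
// Sketch: map a temperature onto the servo's 0–180 degree sweep.
const MIN_TEMP = -20; // assumed lowest value printed on the disc
const MAX_TEMP = 40;  // assumed highest value printed on the disc

function temperatureToAngle(tempCelsius) {
  // Clamp the temperature to the printed range of the disc.
  const clamped = Math.min(MAX_TEMP, Math.max(MIN_TEMP, tempCelsius));
  // Linear mapping: -20°C -> 0 degrees, 40°C -> 180 degrees.
  return Math.round(((clamped - MIN_TEMP) / (MAX_TEMP - MIN_TEMP)) * 180);
}

// Example: a reported temperature of 4°C gives an angle of 72 degrees,
// which the page could then send to the Arduino over the WebSocket bridge.
```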

ezgif.com-video-to-gif (4).gif

In the video, we demonstrate the example. I submit Dubai first and the arrow points at 35, which is pretty accurate. The next city I submit is Oslo, for which the arrow shows 0 due to the uneven spacing between the degree marks. The arrow was inaccurate, since the screen displayed around 4 degrees, although it still gives the user a hint that the temperature is low. The user would probably prefer having accurate temperatures displayed.

Thanks to our peers’ feedback, we got another idea that would be interesting for the example: the users would control the arrow instead of the system and choose a temperature of their choice. Cities within the chosen temperature range would then appear on the screen for the user. This function would make the example unique; we have not seen it anywhere before. We tried to implement this function in the example, but we did not manage to do it.

We modified the code examples from W3Schools for our game. We got the speed of the character to react to the servo. The game is similar to the mobile game Flappy Bird, in which walls spawn outside the screen and move towards the player. However, the walls have holes, and the size of these holes is randomized. The player needs to go through the holes and avoid colliding with the walls; if the player collides, the game is over. The player controls the character along the y-axis. I worked and experimented a lot with the character’s speed. At first, the character’s speed kept increasing while the arrow was turned more than 100 degrees and kept decreasing while it was turned less than 100 degrees. These accumulating speed values made it feel like the character was on an ice rink, so the game became extremely difficult to control with the movement. After spending some time on speed adjustments, the controls finally started to feel right. The speed is now set to 1 when the arrow is turned more than 120 degrees, forcing the player to put some effort into turning the arrow; the character moves up along the y-axis during this interaction. Between 80 and 120 degrees, the value is set to 0, which keeps the character still. Below 80 degrees, the value is set to -1, which makes the character move down, so the player is also forced to put some effort into turning the arrow down. Turning the arrow up and down felt surprisingly good and natural as a controller for the character. There was a little bit of delay in the turning, but that did not stop the game from feeling entertaining.
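
In code terms, the mapping boils down to a small threshold function like the sketch below. The function name is made up for illustration, and the angle is assumed to arrive from the servo via the WebSocket bridge; the important part is the dead zone, which is what removed the ice-rink feeling.

```javascript
// Sketch of the angle-to-speed mapping described above (not our exact code).
function angleToSpeed(angle) {
  if (angle > 120) return 1;   // arrow turned far enough up: move the character up
  if (angle < 80) return -1;   // arrow turned far enough down: move the character down
  return 0;                    // 80–120 degrees is a dead zone: the character holds still
}
```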

ezgif.com-video-to-gif (2).gif

 

The video shows me interacting with the game. I turn the arrow both up and down to move the character, and it works pretty well. The box should be oriented as shown in the video; if it faces another way it might be confusing, since it becomes hard to grasp the connection between up and down in the game when you are actually turning the arrow right or left.

I mentioned in the previous module that we should have let our peers try our examples, so this time we let other classmates play our game, and they really liked the arrow as the controller. One of our classmates mentioned that, due to the delay, it felt like steering a boat.

We came up with a good twist to the game for our peers. One classmate was looking at the game while the other was behind the laptop with no knowledge of when to move or in which direction. The classmate watching the game had to tell the other person whether the character had to move up or down. The interaction they have with each other gives the game a new perspective: they have to collaborate to survive as long as possible. I thought this was a really interesting insight; it changes the game dramatically and makes it more difficult. It becomes a cooperative game of fast communication. These interactions would not be possible with a keyboard.

ezgif.com-video-to-gif (3).gif

The video showcases our classmates trying to get through the walls by communicating. It gives the game a whole new perspective and new interactions.

Thursday, 26th October

On Thursday, we documented the examples as well as the videos. We did not continue to work on the examples; we used most of the day to further tinker with our thoughts and write in the journal.

Over these past weeks, we have explored how movement can be beneficial for the user, both when the user interacts through movement and when the system does. We have come up with some interesting examples and ideas, as well as important insights regarding the topic. I will talk more about these insights next week.

The idea we got regarding the temperature example could be beneficial for tourists, regardless of whether they prefer warm or cold weather. It is an interesting approach and it could help a lot of tourists decide on a location, saving them time researching and finding cities. The cities would also be displayed in a list, so the user would have their options collected in one place. There are possibilities to develop the idea further by adding filters for continents or a popularity indicator so the user can see how popular a location is. The interaction with movement is that the users carefully point the arrow at the temperature they desire. The movement might be unnecessary in this case for people who would rather use digital input. Another idea that would make the movement of the arrow valuable is if the arrow indicated the weather’s current condition: instead of degrees, there could be sections for sunny, cloudy, rain and snow, and the arrow would point to the current type of weather. I think the first thing one wants to know after waking up is the weather, so one can plan the day.

The movement we were focusing on last week with the lever is not quite the same as the interaction in our game. We had to make the lever smaller due to its weight. The player can use the same movement, but it won’t be as comfortable with the arrow because the tip of the arrow is pointy; therefore, the player needs to hold it a bit lower than on the lever. The most natural movement in the game is the turning movement, a movement we usually use when turning a wheel. For instance, some microwaves have a wheel to set the timer, and radios use a wheel to raise or lower the volume. With these products, users can turn the wheel continuously until they reach the desired amount. In the game, however, the player cannot turn the arrow continuously but must pay attention to the walls and turn the arrow accordingly to survive the situation.

 

Week 8: Module 3 – Part 1

Monday, 16th October

On Monday, the third and final module began, which gave us the freedom to choose a topic. I got the same groupmate as in the previous module, and the material this time is the Arduino with the servo component. The servo component for an Arduino is a small motor that can be turned to several positions. It also has an arm which can be turned 180 degrees. We have never worked with the servo before, so it will be interesting to experiment with it and see the final results.

On Monday, we also had a lecture about the third module as well as information about the essay at the end of the class. The teacher went through his code samples for the Arduino and the ws-serial-bridge. “WS” stands for WebSocket, which is, according to Matt West from Treehouse Island, Inc., a connection between a client and a server over which they can send data to each other at any time. The code sample uses a WebSocket to send data from an input field to the Arduino. The data from the input field can change the position of a servo connected to the Arduino. We managed to use the WebSocket through node.js, an open-source JavaScript runtime, which we installed. We installed it through the command prompt and connected it to the same port as the Arduino was using. Thanks to this, we could receive Arduino data through the command prompt instead of the serial monitor. This allowed us to run a node.js server where we can send data from the input field to the Arduino in order to interact with the Arduino and the servo. This is the first time in the course that we work with physical materials; all the materials in the previous modules were digital. We only have one servo to work with, though, which responds to analog feedback.
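
To make the setup concrete, here is a rough sketch of the browser side of that idea, assuming the bridge simply forwards whatever text it receives to the Arduino’s serial port. The port number, element ids and message format are my own assumptions for illustration, not the teacher’s actual ws-serial-bridge code.

```javascript
// Browser-side sketch: send a servo position (0–180) over a WebSocket.
const socket = new WebSocket('ws://localhost:8080'); // assumed bridge address

const input = document.querySelector('#servo-position'); // assumed input field
const button = document.querySelector('#send');          // assumed submit button

button.addEventListener('click', () => {
  // Clamp the typed value to the servo's range before sending it.
  const angle = Math.min(180, Math.max(0, Number(input.value) || 0));
  // The bridge is assumed to pass this text on to the Arduino,
  // where a sketch parses the number and calls servo.write(angle).
  socket.send(String(angle));
});
```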

Tuesday, 17th October

On Tuesday, we started to experiment with the servo and tried to make the uploaded code work with node.js and the server. We had a lot of problems getting the servo to work and spent the whole day trying to figure it out. The servo did not calibrate, which produced hugely inaccurate numbers. The code displayed a box with an arrow inside, which indicated the direction of the servo, and it displayed the position of the servo as well.

Since the topic for this module is our own choice, we decided to have movement as the topic, the same topic as the exercise in the first week of the course. Movement will be really interesting in this module because the Arduino materials allow us to create different kinds of movement than, for example, swiping on a screen. How the user interacts with a product can be explored from a new perspective. We could not get the servo to work properly during Tuesday; however, Elias figured out the problem the following day and helped me get it to work.

Friday, 20th October

On Friday, we began to create some examples. With the movement topic in mind, we thought about a specific movement which is usually used while interacting with levers: the movement when one pulls the lever towards oneself or pushes it away, both of which are explicit interactions. It is a movement we think is natural for humans, based on how our hands and arms are shaped. It is the movement we use when pumping water manually from a water pump, and it is also used by slot machines. We do not see this movement with our digital products, and why is that? Is it because the movement does not fit our products, or because we would rather interact with the products through the touchscreen? On Wednesday, Elias had also implemented a video in the code whose volume he could change through the servo. We continued with this code on Friday, and we built a box with a lever using a laser cutter.

22689824_1422934487824184_1075114581_o.jpg

Here is the result from the laser cutter. The box has a roof and three walls, and the lever is attached to the right wall.

22711774_1422934574490842_1765203108_o.jpg

We added the servo inside of the box so the lever can easily be connected to the servo.

22690416_1422934641157502_1023472547_o.jpg

The picture shows more clearly how we attached the lever.

22547174_10212053234420809_6653913852092088320_n.gif

The video shows how I perform the interaction by pulling the lever; I can also perform it by pushing the lever away. The interaction affects the volume of the video.

22525918_10212053234340807_8134695928666783744_n.gif

A video with the result in focus: the volume of the video becomes lower when I pull the lever towards me and rises when I push the lever away. It works while the video is playing, so the user can change the volume to the desired level through an arm movement. This is possible because the lever can be turned 180 degrees.
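
Roughly, the page side of this example can be sketched as below, assuming the bridge reports the servo’s position (0–180) back to the page as plain numbers; the port and message format are assumptions on my part, not Elias’s actual code.

```javascript
// Sketch: map the reported servo angle (0–180) onto the HTML5 video volume (0–1).
const video = document.querySelector('video');
const socket = new WebSocket('ws://localhost:8080'); // assumed bridge address

socket.addEventListener('message', (event) => {
  const angle = Number(event.data);   // servo position reported by the Arduino
  if (Number.isNaN(angle)) return;
  video.volume = Math.min(1, Math.max(0, angle / 180)); // 0° = muted, 180° = full volume
});
```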

When I tried the lever, I thought that the interaction takes up a lot of space in my nearby surroundings. Lowering the volume with a mouse or a phone does not take the same amount of space. Based on this experience, the movement would feel better if the connected product were further away; the movement felt clunky and unnecessary while the laptop was very close to me. I asked myself the question: “Why would I use the lever if I can control the volume faster by interacting with the touchpad?” I came to think about the distance between the user and a TV. The box would fit surprisingly well on the side of an armchair or couch, and then the user could change the volume of the TV by interacting with the lever. The user is nowhere near the TV, so space won’t be a problem. Why would the user use the lever movement instead of a remote control? The lever gets the user to the desired audio level faster than a remote control. The remote control has a button which the user needs to press and hold to raise the volume, and another button to lower it. If the user wants the audio to go from a low level to a high level, they need to hold the button longer; with the lever, a single movement already takes the user to the highest volume. However, the movement also comes with problems, such as mobility. A remote control has great mobility because it is not connected or attached to any other material; it can be anywhere in the room. A lever, on the other hand, has to be attached to a couch or an armchair, which removes that mobility. A remote control also has several functions, for instance changing the channel or turning off the TV.

Nonetheless, the movement feels satisfying when controlling the audio, and I felt that I had more control over the audio than when using digital approaches. The feeling of control, in a comparison between physical and digital, is an interesting thought. Why did I feel I had more control with a physical arm movement than with a digital one? Another case where the user can feel more or less in control is while interacting with a stove. On a digital stove, the user presses and holds buttons on the screen to start the stove or change the heat. On a physical stove, the user holds and turns the stove knobs to change the heat. In hindsight, I have used both kinds of stoves, and I feel that I have more control with the physical one than the digital one in this case as well. I think that uncertainty is a big factor: the user might feel that the digital stove may not trigger every time they interact with it. The user can have messy or wet fingers, which fingers usually become while cooking in the kitchen. Messy fingers can make it difficult to interact with the stove and make the user uncertain whether the stove will respond. A remote control only gives visual or audio feedback to let the user know that it is working. The lever’s position, however, also indicates the audio level without any digital visual or audio feedback. The feel of interacting with an object physically is another big factor. While interacting with the lever, I got a constant feeling of feedback because the lever was moving with my interaction and the position of my arm changed during the movement. Bringing up the stove case again, the stove knobs change direction while the user turns them, but the user’s hand changes direction at the same time, which the user can interpret as the knob working and the heat about to change. These factors are the reasons why interacting with a physical object feels like it gives the user more control.

We worked on another example in which we experimented with the implicit side of the movement. We asked ourselves: “How can the system use this movement without human interaction?” As I mentioned earlier, the direction of the lever also gives feedback to the user, so how can the system use this indicator as feedback? We thought about probably the most common communication between analog and digital: a watch or a timer.

We tried to modify the code to make a timer out of the movement. We saw that the arm of the box was not following the code; it was a bit inaccurate. The arm was slower than the code. We realized that the current arm was too heavy, so we changed it to an arrow made of carbon, which definitely solved the problem. We made a timer which ticks every second, and the arrow indicates how much time has passed. The system works really well with the movement. We thought about several ideas in which the system could use this indicator as feedback. One was a weather system with several sections showing different kinds of weather, such as rain, cloudy, and sunny. The arm would point at the current weather for the day, letting the user know what the weather is outside. A similar idea could replace weather with days instead, so the arm would tell the user which day it is. There is a lot of variety in how the arm indicator could help the user. It would not, however, be possible to create a regular clock with this movement: a clock hand moves through 360 degrees while the servo can only turn 180 degrees.

ezgif.com-video-to-gif.gif

The video shows the second example that we did. The system uses the movement to tick seconds, and the arrow indicates to the user how many seconds are left.
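Since the post does not include the timer code itself, here is a minimal sketch of the idea in JavaScript, assuming a johnny-five setup with the servo connected to an Arduino running Firmata; the pin number and the 60-second duration are assumptions, and our actual code may look quite different.

```js
const { Board, Servo } = require("johnny-five");

const board = new Board();

board.on("ready", () => {
  const arrow = new Servo(9);   // hypothetical signal pin for the arrow arm
  const totalSeconds = 60;      // assumed length of the timer
  let elapsed = 0;

  arrow.to(0);                  // start with the arrow at 0 degrees

  const tick = setInterval(() => {
    elapsed += 1;
    // Map elapsed time onto the servo's 0-180 degree range,
    // so the arrow's position shows how far the timer has come.
    arrow.to(Math.round((elapsed / totalSeconds) * 180));
    if (elapsed >= totalSeconds) clearInterval(tick);
  }, 1000);
});
```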

Week 7: Module 2 – Part 3

Monday, 9th October

On Monday, we did not continue working with the examples, nor did we begin new ones. Instead, we began to prepare for the Show’n’Tell on Thursday. We wanted to talk about four of our examples. The introduction will be about what theme we started to focus on, shortly after which we illustrate the desktop clean example. We divided the presentation into different parts so both of us would speak. The introduction is Elias’s part, two of the examples are mine, and the last example belongs to him again. I will talk about the hover example and the zoom example and explain them both. Lastly, Elias will talk about the stack example and our main insights.

Thursday, 12th October

On Thursday it was finally time for the Show’n’Tell: our chance to show our examples, talk about them, and get feedback. We were in the second session out of three, together with five other groups. Some groups had really good examples. For instance, one group had an example about the topic “sense of where” in space. Their example ran on a phone, and by tilting the phone the user changed between pages with different background colors. They had a visible overview of the current location in the space, which reminded me a lot of the small dots on smartphones that indicate where the user is. I thought the example was very well made. I noticed that another group used sketches, which no other group did. I think it is really effective to use sketches to explain and show the thought process behind a design. It is a fast and simple approach, which is needed when there is no time to code the function. Whenever we need to show a function which we won’t have time to implement, we should definitely consider sketching. The sketches do not need to be pretty either.

Reflection

We got a lot of feedback on our examples after our presentation. Some people were confused about the zoom desktop example because they did not understand the interaction. I think one factor behind this confusion was that it was difficult to show everyone the example. I wish we had had gifs of the examples ready for the presentation. It was really difficult to show and interact with the example at the same time. We had to turn our bodies around to see the laptop screen while interacting with the example, which might have blocked the view for some of the classmates beside us. With gifs, we could also have shown the example several times without putting any effort into it. Some groups had gifs and it worked pretty well. It would have been easier for us to show the examples this way.

In a previous blog post, I mentioned that in the zoom example the desktops become like layers on top of each other. The teacher said that the desktops cannot be layers because of the zooming. I can agree with that, because space either expands or shrinks by zooming. I should have explained this better in the blog post. The zoom interaction makes it look like the desktops are layers due to the appearance of the other desktop when the user zooms in. It only gives the illusion that they are layers, when in reality the zooming expands the desktop so the previous desktop disappears as the next one appears. Maybe the illusion would be clearer if the example had more than two desktops. I realized that a zoom interaction might not have been a good choice for this purpose. It would be frustrating for the user to always have to use this interaction just to switch desktops. An example of switching desktops smoothly is the MacBook, on which the user can change desktops anytime with the touchpad. We should have taken more inspiration from the MacBook regarding this problem. The transition of zooming in and out between the desktops does not look aesthetic either. I can imagine a lot of icons being resized by the zooming, covering the whole screen and becoming extremely pixelated. A group mentioned that computers can already create several desktops, which is true. However, the new desktops still have the same icons and programs as the standard desktop, just with no active programs. It does not clear nor make any new space for other programs or files.

We also got feedback on the stack example. The teacher wondered how the stack would work with several different stacks, and what would happen if there were files in the way of the opened stack. I think that the iPhone has a good solution to this problem: it blurs the rest of the icons and puts the folder in focus. The user becomes completely focused on the folder due to the blur effect. However, it does not let the user add any files to the folder while it is open. We never thought much about how the stacks would work alongside other stacks, but we would definitely use the iPhone solution if we developed the example further. Smartphones and tablets have clever solutions to this problem while computers do not.

I think that Elias and I should have been more decisive about what we wanted to work with. We both agreed on the “making space” theme and that we wanted to work with desktops, but we could have narrowed it down a bit more. The topic of space is tremendously broad, so a specific focus is required to not get lost. What kind of approach to cleaning space did we want to work with? Sweeping the space under a rug, or cleaning the space completely? That is something we could have done better if we did it another time: be more determined about our work so we can give a better explanation of why we made the choices we did and how they benefit the user. The teacher mentioned that it is more important to articulate the example than to represent it. I think we would have articulated the examples better if we had asked ourselves the questions I mentioned. We had a great discussion in the first week about all the themes of space and thoroughly explained them to get a comprehension of the topic, which is needed to solve a problem. We were open-minded about our ideas, which is important for creativity and for our project. We should also have done user testing of the examples we built on family members or friends. There is always a risk that we are too focused on a solution or problem and miss the bigger picture; new minds and eyes could help us spot the errors and the room for improvement.

Summary

Overall, I think that human space is a really interesting and important topic for interaction designers. How we navigate and move inside a space without losing our sense of where we are is an important aspect for interaction designers to focus on, so that users do not react to products with confusion and frustration but instead experience them as simple to understand and use. Before, I thought space was mostly about the user personalizing things to achieve the “home feeling”, but I have gained a better comprehension of the topic. Space is more than just personalization; other factors improve the user’s experience with the product as well, including fluent movements within the space and the experience of the space itself. Movement in space should be smooth and effortless so it does not disturb the user’s actions. I will also consider the user’s sense of where within a space in future projects. Otherwise, the user might get lost in the space, which leads to a bad experience. A familiar space is a space that we have experienced, and our reactions based on that experience lead us to either like it or dislike it. We get a sense of where things are, which improves every time we experience it. Windows users usually stick with Windows operating systems rather than Apple computers because they are more experienced with Windows. They know how to navigate through it, and vice versa for Apple users.

Interaction designers should always aspire to think of different approaches to make the space as beneficial as possible for the user. Even though space itself is a huge topic, it should be narrowed down to questions that the designers ask themselves. What do we want to achieve with the space and why? How will it benefit the user? How does it fit with our design? Discussing these questions makes the work significantly more articulate and therefore simpler for the designers to explain and illustrate to stakeholders and users.


Week 6: Module 2 – Part 2

Monday, 2nd October

The Inactive Example 

On Monday, I continued to work with the inactive icons example, which I improved to an extent. I changed the interaction so the user has to press the icon for two seconds until it becomes inactive. I wanted to make the example more realistic, as if it were a real function of an operating system. The previous interaction, double-clicking, is the same interaction users already use to open files or start programs in current operating systems. Thus, double-clicking to deactivate icons would not work because it already fills an important purpose. Pressing and holding an icon does not have an existing purpose, and that is why I chose that interaction for the example. I used the example as if it were a real desktop, and it felt better than the previous iteration. The inactive icon becomes immobile and gets less priority than an active icon. The user can move icons without mistakenly moving the undesired ones. However, this can also lead to problems if the user puts icons on top of each other. It will be difficult for the user to know which icons are under the visible icon, as well as how many there are beneath it. The reason this problem exists is that the desktop is two-dimensional, so we can only see the top of the desktop. If the desktop were designed in 3D, for example, the user could rotate the view and see the covered icons. The situation can be compared to regular desks at workplaces that have documents in a stack of folders with papers on top. One won’t be able to see what the folders contain unless they are moved and opened. Nonetheless, we can see how many folders have been stacked. There are different approaches to solving this issue. One approach could be that the user hovers the mouse over the overlapping icons or folders and gets visual feedback on what kind of objects are inside the stack. The Windows 10 taskbar is a good example, where the user can hover over a program and see what it contains. For instance, if the user has several Google Chrome windows open, the user can see all of them and what they contain by hovering over the icon.

example4.gif

In the video above, I demonstrate the example. The pointer of the mouse determines which of the active icons gets priority. This makes it easier for the user to interact with the icon of their interest instead of moving other icons by accident. I press and hold the Google Chrome icon for two seconds to make it inactive, which gives the icon a transparent appearance. When I double-click the inactive icon, it ends up in the recycle bin. With the Skype icon, I demonstrate the right-click interaction, which makes the inactive icon active again. As with the previous example from last week, it was really interesting and informative to try out the interactions. The movement of the icons is handled by the “pan” gesture from hammerjs.
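Since the example only appears as a gif, here is a rough sketch of the interactions with hammerjs; the element id, the CSS class, and the way an icon is removed are assumptions rather than our exact implementation.

```js
const icon = document.querySelector("#chrome-icon");   // hypothetical icon element
const hammer = new Hammer(icon);

// Require a two-second press before the icon becomes inactive.
hammer.get("press").set({ time: 2000 });
// Allow dragging in every direction, not only horizontally.
hammer.get("pan").set({ direction: Hammer.DIRECTION_ALL });

hammer.on("press", () => {
  icon.classList.add("inactive");          // transparent, immobile appearance
});

// Double-clicking an inactive icon sends it to the recycle bin;
// here it is simply removed from the page.
hammer.on("doubletap", () => {
  if (icon.classList.contains("inactive")) icon.remove();
});

// Right-click (contextmenu) makes the icon active again.
icon.addEventListener("contextmenu", (event) => {
  event.preventDefault();
  icon.classList.remove("inactive");
});

// Active icons can still be dragged around with the "pan" gesture.
hammer.on("panmove", (event) => {
  if (icon.classList.contains("inactive")) return;
  icon.style.transform = `translate(${event.deltaX}px, ${event.deltaY}px)`;
});
```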

According to Bollnow, space has a natural structure with different parts that explain the space. These parts are referred to as directions: up, down, front, behind, right, and left. Bollnow refers to the Greek philosopher Aristotle, who states that these definitions exist because of nature. (1963, 29) This natural structure can be connected to the inactive example we made. The inactive icon becomes a lower-priority icon and stays in the shadow, behind every other icon. This means the natural structure does not only exist in the physical world but also in the digital world. The way these worlds react to our interactions is also similar because of this shared natural structure. A book under a pile of books gets less attention than the visible books, just like in the example. Turning a book’s page to the left has the same meaning as swiping left on a tablet to turn the page.

Tuesday, 3rd October

The Hovering Example

On Tuesday, we continued to tinker and work on iterations. We created a quick example trying out the hovering interaction. The user can display a sidebar with the most recently used applications by hovering over the left side of the screen. The original idea also included a sidebar on the right where the user could keep their important files and programs, but we never implemented it. The sidebar gives the user fast access to the applications and a cleaner desktop, because the user won’t need the icons on the desktop when there is a faster route to access them.

example5.gif

The video shows how a user could reach the most recently used programs through a sidebar on the left side of the screen. As long as the pointer of the mouse is on the sidebar, it won’t disappear. Quick access to the most recently used applications reminds me of smartphones, which provide this kind of fast access. If the user has a lot of programs taking up desktop space, they can easily find the most used ones in the sidebar. It also reminds me of the Windows taskbar, to which the user can pin icons for quicker access to programs.
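A small sketch of the hover behavior is shown below; the element id and the 20-pixel trigger zone are assumptions, not the exact values from our example.

```js
const sidebar = document.querySelector("#recent-apps-sidebar");  // hypothetical id

// Show the sidebar when the pointer reaches the left edge of the screen.
document.addEventListener("mousemove", (event) => {
  if (event.clientX <= 20) {
    sidebar.classList.add("visible");
  }
});

// Hide the sidebar again once the pointer leaves it.
sidebar.addEventListener("mouseleave", () => {
  sidebar.classList.remove("visible");
});
```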

We spoke to our teacher Clint and showed him the progress so far. He said that we were just touching the surface of the theme and that we should not be so locked in on icons and programs. We should make our examples more abstract than concrete; for example, a folder that changes color as the amount of data in it increases. He also mentioned “BumpTop”, which is a 3D desktop with physics. With BumpTop, the user can organize programs in a different way and also see details of a stack of files. BumpTop really gives that “office” vibe due to the possibility of having pictures on the wall. The space as well as the freedom BumpTop offers give the user a broad variety of personalization options.

Thursday, 5th October

The Zoom Example

On Thursday we continued to make more examples, this time with our teacher’s feedback in mind. We made an example inspired by Google Maps, specifically the zoom-in-and-out mechanic which our teacher also mentioned. In Google Maps, the information about a location becomes visible when the user has zoomed in on that location. The information disappears when the user zooms out of the location, but it is still there; other information within the new view is displayed instead. In our example, the user can have an infinite number of desktops, which creates an infinite amount of space. To reach these desktops, the user zooms in on the current desktop, which eventually disappears and makes room for the next desktop. Windows 10 has a similar feature where the user can have several desktops, but it only concerns open programs and files, while in our example the user extends the space of the whole desktop. The example currently has only two desktops. However, I can imagine the example containing an infinite space of desktops; the desktops would then be easier to organize and access. Say there are two desktops: one could contain all work- or study-related materials, while the other could store entertainment programs. The desktops work like layers on top of each other.

example6.gif

The video demonstrates the zoom example making space for a new desktop. I can drag to the left anywhere on the desktop to zoom in. While I am dragging, I can see the icons become bigger and bigger until the previous objects disappear. When I drag towards the right, the icons become smaller again and the previous programs come back into focus. We used the “pan” gesture from hammerjs for the interactions. I can imagine a problem occurring if the user creates a huge number of desktops: the user could quickly become lost in the space. A solution could be some kind of marker indicating where the user is located within the space. Another problem is that the user could accidentally zoom the desktop with the “pan” movement, which could be solved by limiting the interaction to a specific area of the desktop instead of the whole desktop.
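Here is a simplified sketch of how the pan gesture could drive the zoom; the element ids, the scale factor, and the threshold for swapping desktops are assumptions made for illustration.

```js
const desktopOne = document.querySelector("#desktop-1");  // hypothetical ids
const desktopTwo = document.querySelector("#desktop-2");
const hammer = new Hammer(document.body);

let baseScale = 1;
let scale = 1;

hammer.on("panstart", () => {
  baseScale = scale;   // remember where this drag started from
});

hammer.on("panmove", (event) => {
  // Dragging left (negative deltaX) zooms in, dragging right zooms out.
  scale = Math.max(1, baseScale - event.deltaX / 200);
  desktopOne.style.transform = `scale(${scale})`;

  // Once the current desktop has grown past the threshold, show the next one.
  const zoomedIn = scale > 3;
  desktopOne.style.display = zoomedIn ? "none" : "block";
  desktopTwo.style.display = zoomedIn ? "block" : "none";
});
```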

The problem of getting lost in space is common when it comes to zooming. In Google Maps, the user zooms in on a known location but drags further towards the north to see other locations. If the user is not familiar with the area, the user loses their “sense of where” in the space. The information about the location still exists; it just does not appear on the user’s screen until the user zooms out or finds their way back to the location again. For this reason, Bollnow explains that a person who is lost has to use some kind of aid to orient themselves back to a familiar space. Bollnow uses a compass in his example, which shows north, south, east, and west, but a compass does not exist in Google Maps. (1963, 62) A compass could be displayed to solve the problem. Google Maps is an interactive map of the world, so a compass is a relevant solution and essential for navigation. With my example, however, I think a marker on the left side of the screen would help the user, as I mentioned above: a hover effect which shows a sidebar containing a map overview of the desktops. Within the overview, there could be a marker that says where the user is located. This solution is similar to the big standing maps in shopping malls which show shoppers their location in the mall.

The Stack Example

We made another example on Thursday. The example addresses the problem of overlapping icons and programs. When the user clicks on a stack of programs and icons, these objects are visually displayed in a vertical as well as a horizontal row. When the user clicks on a program, the program’s icon moves to the top of the stack and the program opens. The most recently used program is always on top, which makes it easier for the user to reach their desired program. This is a situation I specifically wrote about in the inactive example from last week. However, this example raises the question: what happens with the desktop space if there are already files around the stack?

Smartphones already have a good solution to this problem. The opened folders get bigger priority than the regular icons and cover the view of the icons if they are big enough. On the iPhone, the folders get even more attention and focus: the applications around the opened folder are blurred out, which makes the user give more attention to the folder compared to other devices. However, because of the blur, the user cannot add applications to the folder while it is open, which is possible on Android phones.

example8.gif

The video shows the example after a user clicks on a stack. In this case, there are five files to choose from, visually displayed for the user. When the user clicks on their desired program, it opens and its icon moves to the top of the stack.
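A sketch of the stack behavior could look like the code below, assuming a .stack element that contains .file children; the class names and the way a file is “opened” are assumptions.

```js
const stack = document.querySelector(".stack");   // hypothetical markup

stack.addEventListener("click", (event) => {
  const file = event.target.closest(".file");

  if (!file) {
    // Clicking the stack itself toggles the fanned-out row of files.
    stack.classList.toggle("expanded");
    return;
  }

  // The clicked file becomes the first child, i.e. the top of the stack.
  // The real example would also launch the program at this point.
  stack.prepend(file);
  stack.classList.remove("expanded");
});
```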


Week 5: Module 2 – Part 1

Monday, 25th September

On Monday, Module 2 started and we were assigned to new groups of two. This time the topic is space (human space, to be precise). We will be focusing on categories such as “being familiar with”, “feeling at home”, and “dwelling in” within space. Space is a really broad topic, so we will narrow it down to a few themes. We will make examples using HTML, CSS, and JavaScript.

The themes are:

  • screen space
  • making space
  • knowing space
  • moving in space
  • work-places
  • body and space

This is a very interesting topic for me. To accomplish the home feeling, it is all about options and possibilities. Users have the freedom to make their own choices based on their preferences: for example, background images, organizing desktop or phone applications, phone cases, and customizable layouts. These are all options that the users can decide on and control. Products that give them this control make it easier for them to sense where things are, which is an important factor for familiarity. A comparison could be the following: a couple recently moved into a new house which at first does not feel like a home. But after a few weeks of living there, furnishing the house, and learning where everything is, it will feel more like a home and a familiar space than simply an unknown house.

The literature for this module is a book called Human Space by Otto Friedrich Bollnow from 1963. According to Bollnow, we human beings are trained to give more priority to mathematical space than to abstract space. Having a specific measurement such as 5 cm gives us more pleasure than having to estimate a certain size. (1963, 17) However, do we ever question this three-dimensional measurement or concrete space? There is a whole level of abstract space surrounding this mathematical space that we are blind to.

These statements of Bollnow really highlighted the importance of being aware of both kinds of space and of contrasting them, both in the design process and as an interaction designer in general.

Tuesday, 26th September

On Tuesday, my classmate and I reflected on the themes and wrote a lot about them to get a better understanding. We thought about using the theme “making space” because it was interesting how Apple’s MacBook computers have a z-axis so that files overlap each other, making space for other files. Elias and I wrote the following paragraphs together:

Themes

Screen Space

Sum:

Screen space is the space within the screen; there is also space outside the screen. Screen spaces are small and in general very restricted. The user can only interact with what they see, but even though the space is restricted, there is more space outside the screen. Screens come in different sizes, and we are able to scroll or move over material from the outside space. Can we figure out a different kind of interaction without getting lost in the space?

Making Space

Sum:

Space originates from the word room and an active choice has been made to make it. Space is not an abstract container, but something that has been made.
Making space is therefore not always as easy as it looks: if space has already been made, how can we make sure that more is made? On screens, scrolling, zooming, and swiping are all ways of making space. But on a more philosophical level, are things like cleaning your desktop really “making space”, or are you just re-making space that has already been made? On a Mac, the desktop’s space is infinite since you can put things on top of each other. Cleaning this space thus does not make more space. The cool thing, though, is that since you can put files “on” each other, you have a Z-axis of space as well.

Knowing Space

Sum:

Knowing space is about orientation, which means knowing where you are in relation to other things. It’s a “sense of where”, like having a sense of where things are in your environment. On a computer, for example, you have a sense of where folders and applications are. Reading a book or article, you are informed which page you are on so you don’t get lost. Thus, there are aids we have developed to better understand where we are. Another example is active links on websites that look different from the other links as long as you are on that page. This indicates that you are on this page of the website, like a map telling you where you are in a city. This is useful if there are a lot of links and lots of information on the page.

Moving in Space

Sum:

Since the concepts of “here” and “there” exist, you can move things in space. We can move ourselves. A condition for moving is knowing where you are. If we do not know and/or are disoriented, we find it difficult to move in a proper way. When we have reached unknown territory, we must rely on technology for navigation. Navigation is simply moving in space with the help of tech.


We can also move in digital space. We don’t do it as directly as in real-life space, but with the help of a touchpad, mouse, pen, or similar to manipulate the interface. Since “here” and “there” exist as much in interfaces as in real life, we can move things with ease there as well.

In virtual reality, to avoid motion sickness while moving in a virtual world instead of the real one, the most common solution is teleporting, which you can’t do in real life. In virtual reality, the eyes tell you that you are moving while your other senses tell you that you are not.

Workplaces

Sum:

Workplaces are places where we work both collaboratively and alone, like a kitchen, a school, or a workshop. We have workplaces so we can get work done, and they are usually places with a lot of space. Workplaces also need to be organized so the workers know where the materials are. Examples of digital workplaces are Google Docs, Photoshop, and Trello. Workplaces are designed so the workers can do their work fluently without any questions.

Body and space

Sum:

In real life, we are constantly moving in spaces. We do this with our whole body and we do this automatically without doubt or thought. The same thing does not go for the digital space. Even if we might feel comfortable with the space we use and have on our phones, tablets or laptops, we might struggle to use them as effortlessly as real-life spaces. This is most likely also because we are less in charge of what happens in an interface on a smartphone. We click, swipe and scroll but are dependent on the program to work. Our body responds almost directly to our control and that makes us feel in control. The phone might not respond, and sometimes we can’t do anything about it.

Thursday, 28th September

On Thursday we had a workshop about some of the material, including live-server and gestures. With live-server, we learned that it is possible to access the server through the mobile browser on a smartphone. This approach is very important if our example concerns smartphone users. We can now see in real time how the examples respond to interactions on the phone. This will make it easier for us to detect problems and come up with ideas. I am really happy that we got to learn this approach; I feel it will be very valuable for future projects.

We installed and started to experiment with a JavaScript library called “hammerjs”, which includes gestures like swiping, panning, and pinching. We will use these code samples for our examples so we can work with all the regular interactions the user is familiar with. The code that was given to us is easy to understand and work with.

You can access the hammerjs code through the following link:

http://hammerjs.github.io/
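To give an idea of what the samples look like, here is a minimal hammerjs setup of the kind we started from; the element id is hypothetical and the library is assumed to be loaded from the link above.

```js
const surface = document.querySelector("#desktop");   // hypothetical element
const hammer = new Hammer(surface);

// Pinch is disabled by default in hammerjs and has to be enabled explicitly.
hammer.get("pinch").set({ enable: true });

hammer.on("swipe", (event) => {
  console.log("swipe direction:", event.direction);
});

hammer.on("pinch", (event) => {
  console.log("pinch scale:", event.scale);
});
```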

Friday, 29th September

On Friday we continued to experiment with the code samples, focusing on the “making space” theme. We tried to make our own improved examples of the MacBook layout I mentioned earlier, with files overlapping each other. It was a test to see what we could do with the materials. When overlapping icons and files on a MacBook computer, the desktop can easily become overflowed with objects. We made an example in which the user deactivates icons on the desktop, which makes them immobile and easy to delete. I learned a lot about event handlers in JavaScript while doing the example. To deactivate an icon, the user double-clicks it, right-clicks to activate it again, and deletes it by clicking it after deactivating it. I wanted to work on the example more before uploading it visually to the blog. A gif will certainly be uploaded next week.

The Desktop Example

example3.gif

We also did another example involving cleaning a desktop by moving its icons (shown in the video above). We did this to get a better understanding of what we were able to do with the interactions, in this case the drag-and-drop interaction. The user drags the paper from the desktop into the trash bin, cleaning the desktop. It was really fun to play around with the code and create examples.
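As a rough sketch of how the drag-to-trash interaction could be wired up with hammerjs, assuming element ids that stand in for our real markup:

```js
const paper = document.querySelector("#paper");   // hypothetical ids
const trash = document.querySelector("#trash");
const hammer = new Hammer(paper);

// Allow dragging in every direction, not only horizontally.
hammer.get("pan").set({ direction: Hammer.DIRECTION_ALL });

// Move the paper along with the pan gesture.
hammer.on("panmove", (event) => {
  paper.style.transform = `translate(${event.deltaX}px, ${event.deltaY}px)`;
});

// When the drag ends, check whether the paper was dropped on the trash bin.
hammer.on("panend", () => {
  const p = paper.getBoundingClientRect();
  const t = trash.getBoundingClientRect();
  const dropped = p.left < t.right && p.right > t.left &&
                  p.top < t.bottom && p.bottom > t.top;

  if (dropped) {
    paper.remove();               // the desktop is now one icon cleaner
  } else {
    paper.style.transform = "";   // snap back if dropped somewhere else
  }
});
```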


Week 4: Module 1 – Part 3

Monday, 18th September 

On Monday, we began a new example for the module. This time the example is about the brightness of the environment. We reused the function which captures images from the previous emotion example. If the environment is too dark, the camera won’t take any pictures; the environment must reach a certain brightness percentage to trigger the camera. The biggest difference between the examples is the computer vision. The emotion example reads the user’s facial expressions as input, while in this example the system collects post-processed pixels from the camera’s feed as input. The idea behind the example is that the user can always be sure that the pictures taken will have good lighting. The idea could, for instance, be an interesting camera feature on a smartphone. The user could have the function as an option, and it would trigger a timer for the camera when the environment is bright enough.

In the example, the user can see the percentage of post-processed pixels. When it reaches more than 50 percent, a timer starts without notifying the user. The timer is 10 seconds, after which the camera takes a stack of images which the user can scroll through.

The example reminds me of camera functions where the user can set a timer for the picture. One of the differences between the example and a regular camera is that the system takes the initiative to set the timer.

The camera collects data with the pixels as input, which the user does not gain any information from, and it also triggers a timer for the camera. According to the implicit interaction framework, this part of the interaction is background/proactive. As mentioned above, the user gets some information in the form of the percentage of post-processed pixels that the system detects, which is foreground/proactive. The user gets the results in the form of images that they can scroll through; the user takes the initiative for that interaction, which makes it explicit. According to the framework, it is foreground/reactive because the user can visually see the results and interact with them.

User Presentation

The users implicitly indicate what they are doing through the resulting images. They do not get any notification that they have triggered the camera.

System Presentation

The system shows the percentage of pixels that are post-processed and prints out the images the camera has taken.

Override

The camera is always post-processing the pixels, but it won’t be triggered unless the percentage is more than 60 percent. To trigger the camera again, the user needs to repeat the process.


 

The attributes fit into the example as follows:

Mediated

The user will feel excited when the results become visible, and the system’s detection and initiative will feel like magic.

Uniform

The user has the scroll interaction, which is a natural interaction across websites and applications. The user scrolls to be able to see all the images.

Covered

The user is not aware of the system reading the pixels or of the timer. The user does not need to put any effort into the interaction; a room with decent brightness is enough.

giphy (6).gif

The video displays the result after the camera got triggered. The user can scroll to see all the images that were created.

brightness.JPG

The code which includes the brightness and timer.
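Since the code is only shown as a screenshot, here is a sketch of the brightness check and timer in JavaScript. It treats “post-processed pixels” as pixels above a brightness threshold, which is my own simplification; the element ids, the sampling interval, and the luminance cutoff are assumptions, while the 60 percent threshold and the 10-second timer come from the example.

```js
const video = document.querySelector("#camera");   // hypothetical ids
const canvas = document.querySelector("#frame");
const ctx = canvas.getContext("2d");

// Feed the webcam into the video element.
navigator.mediaDevices.getUserMedia({ video: true })
  .then((stream) => { video.srcObject = stream; video.play(); });

let timerStarted = false;

setInterval(() => {
  if (timerStarted) return;

  // Draw the current frame and count how many pixels are bright.
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  const pixels = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
  let bright = 0;
  for (let i = 0; i < pixels.length; i += 4) {
    const luminance = (pixels[i] + pixels[i + 1] + pixels[i + 2]) / 3;
    if (luminance > 128) bright += 1;
  }
  const percentage = (bright / (pixels.length / 4)) * 100;

  // Enough brightness: start the 10-second timer without telling the user.
  if (percentage > 60) {
    timerStarted = true;
    setTimeout(() => {
      // The real example captures a stack of images here for the user
      // to scroll through; the capturing itself is left out of this sketch.
      console.log("timer finished, capture images");
      timerStarted = false;
    }, 10000);
  }
}, 500);
```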

Thursday, 21st September

Update for the Emotion Camera

On Thursday, we updated the emotion camera example so it would be more different and unique instead of just taking pictures. Instead of taking pictures, the user’s emotions print rectangles on a canvas. The position of the rectangles, as well as their height and width, is random. The colors of the rectangles are based on the emotions; for instance, red represents the angry emotion. If the user shows the emotion more strongly, the color becomes less transparent. The end result looks like a piece of art made from the user’s facial expressions.

The randomness of the rectangles adds more uncertainty to the canvas: the user can never predict how it will look. However, the user does have the control to change the colors of the rectangles with facial expressions, which can add to the mediated attribute. The user can change the canvas to an extent by focusing on a specific emotion, but the system also detects the other emotions, which makes the canvas artwork unpredictable.
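A sketch of the drawing logic is shown below; getEmotionScores() is a hypothetical stand-in for the facial-expression demo we built on, and the colors and size ranges are assumptions.

```js
const canvas = document.querySelector("#emotion-canvas");   // hypothetical id
const ctx = canvas.getContext("2d");

// One color per emotion, e.g. red for angry.
const emotionColors = {
  angry: "255, 0, 0",
  happy: "255, 215, 0",
  sad: "0, 0, 255",
  surprised: "0, 255, 0",
};

// scores is assumed to look like { angry: 0.8, happy: 0.1, ... } with values 0-1.
function drawEmotionRectangles(scores) {
  for (const [emotion, rgb] of Object.entries(emotionColors)) {
    const strength = scores[emotion] || 0;
    if (strength === 0) continue;

    // Random position and size; a stronger emotion gives a less transparent color.
    const x = Math.random() * canvas.width;
    const y = Math.random() * canvas.height;
    const width = 20 + Math.random() * 80;
    const height = 20 + Math.random() * 80;
    ctx.fillStyle = `rgba(${rgb}, ${strength})`;
    ctx.fillRect(x, y, width, height);
  }
}

// In the real example this would run every time the tracker reports new scores:
// setInterval(() => drawEmotionRectangles(getEmotionScores()), 500);
```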

The user could have the example active in the background while being at the computer and, after a while, see what kind of emotions they showed while working or browsing the internet. However, the problem remains that the system cannot really feel the user’s true feelings, merely the facial expressions, which do not necessarily reflect the feelings on the inside. It reminds me to some extent of a painter expressing their emotions on the canvas.

It would be interesting to develop this further than just rectangles. It would be more creative if it had more shapes, maybe a circle which expands along with the surprised emotion. That would make the example more explicit than implicit though.

giphy (7).gif

The video demonstrates the example, I use an angry expression most of the time which explains all the red rectangles.

Friday, 22nd September

On Friday, we had the show and tell, our opportunity to show the examples and talk about what we had learned. It was also a chance to hear about the other groups’ examples and to discuss implicit interaction. I really liked one group’s example where the user could draw on the canvas by making drawing movements in front of the camera. Even though it is a very explicit interaction, I thought it was pretty interesting because the drawing does not need a pen or a tablet; all one needs is the camera. Maybe one day this might be more common when it comes to drawing. I can especially see it in virtual reality.

Another group brought up a discussion about implicitness and how an interaction might be implicit the first time for the user but explicit another time because the user is more certain about the interaction. I think it can be very true, and I can compare it with our emotion camera example. The user needs to figure out how to trigger the camera without any information. The user does not take the initiative for the camera because the user does not know how to trigger it. When the system takes the initiative to take the pictures, the user will eventually figure it out and make direct attempts to trigger the camera. Then the user is more certain about the interaction and interacts with it directly.

There were a lot of examples similar to ours. The camera taking a picture when detecting an emotion was the most common example. Based on that, we chose not to talk about that example. We also chose not to talk about the color detection example because it is too explicit. We talked about the updated emotion example and the brightness example. The teacher liked our examples, and we were the first ones to show an artistic example among the demos. There was a discussion about human emotions regarding the emotion example, namely that the system cannot read the user’s true feelings. I don’t think a camera can detect what a person actually feels based only on facial expressions. I think that if the camera detected body language, audio, and facial expressions, it might get a more accurate hint about the user’s feelings.

Another group had an example revolving around reaching a certain score with the help of facial expressions. One of the teachers said that it is an explicit interaction, which I can agree with. The user directly wants to reach that score without any information on how to get there, which is implicit due to the uncertainty. But the score itself is a goal the user takes the initiative to reach directly, which makes the whole example explicit.

Summary

In these weeks, I have learned a lot about implicit and even explicit interactions. Now I look at interactions in a different way than I did before. I break down an interaction into small parts to understand who takes the initiative and why. I also realized that implicit interaction has become more common in recent years. As users, we do not think about some interactions we do, and it all happens in the background. For example, I thought about how a company like Google gathers information about its users. Users who use Google Chrome on a computer or phone, or any other Google product, give their personal interests to Google by searching and visiting websites. With this information, Google can give its users stories about their interests or commercials relevant to them. The user gives their interest information indirectly, and the system takes the initiative to give relevant information back to the user. The framework of implicit interaction, with its axes, helps me to understand the small parts of an interaction better.

I really enjoyed working with computer vision. I have never done that before, and I was surprised by what one can do with JavaScript. It gave us different ways to think when it comes to interaction because we had never worked with cameras before. We had to make the users interact without them knowing they were interacting or touching a screen. With computer vision, we try to make the system understand our movements and interactions without physical touch. So far we mostly use this technique for security purposes, for example face recognition, eye scanning, and alarms, but also for entertainment purposes like Snapchat or Kinect.


Week 3: Module 1 – Part 2

Monday, September 11th 

I got a very unexpected cold and fever during the weekend and I had to rest on the Monday of the second week.

Tuesday, September 12th 

On Tuesday, we decided to reflect deeper about our example’s attributes, the implicit interaction framework, and implicit interaction techniques.

Implicit interaction techniques include three different features: user presentation, system presentation, and override. The paper by Ju et al. mentioned in the previous blog post explains the features. According to Ju et al. (2008, 7), user presentation is how users indicate to the system what they are doing or would like to have done. Regarding system presentation, Ju et al. (2008, 8) explain that it shows the user what the system is doing or what it will do. They further explain the override feature: override allows users to interrupt or stop the system from engaging in a proactive action. It usually happens after one of the mentioned features alerts the user to an unwanted inference or action.

Emotional Camera.jpg

This image presents the framework applied to our capture-images example. When the system is tracking the user’s facial expressions, it is background/proactive: the user puts only a small amount of effort into the input by being in front of the camera while it tracks emotions. When the camera takes a picture of the user, it is a foreground/proactive interaction: the camera automatically takes a picture when an emotion is triggered and plays a shutter sound alert. These events give visual and audio feedback to the user. The user then sees the newly revealed picture, which raises self-awareness and is foreground/reactive.

In my previous blog posts, I mentioned that we were interested in the attributes covered, mediated, and uniform. How the attributes fit into this example is explained below.

Covered

The camera live stream is hidden from the users so they are not aware that it is collecting data.

Mediated

The user only needs to show an emotion; the user does not know about the interaction or the camera.

Uniform

The interaction is unobtrusive; it is hidden and triggered by emotion.


 

The implicit interaction techniques that I wrote about before will also be applied to the example. The example’s user presentation, system presentation, and override are described below.

User Presentation

The user knows what they are doing by watching the screen where the computer’s camera feed is displayed. (The user can see in percentages which emotion the computer detects; when an emotion reaches a certain percentage, the camera takes a picture.)

System Presentation

The computer takes a snapshot of the user whenever the user shows an emotion and displays the percentage of each emotion for the user.

Override

The camera stops reading the user’s face when the user’s face is undetectable. The video will freeze until the user’s face is detectable again.  


  

Thursday, 14th September

On Thursday, we came up with another example for Module 1. We wanted to focus on colorblindness and made an example in that area. The example detects specific colors in a video or camera feed and displays the area of the color as well as the name of the color. We chose to only detect a few specific colors at a time; otherwise there would be too many rectangular areas.

It reminds me of printers in some way: they print out papers in color and are precise about the color placement. It also reminds me of glasses for colorblindness which help colorblind people see colors. The idea behind the example has a good meaning. People with colorblindness have difficulty seeing the difference between colors in their daily life, and there are several situations that create problems for them. A traffic light has three different colors that a colorblind person may have difficulty telling apart, which can be a problem in traffic. If a colorblind person goes grocery shopping, there might be groceries that only differ by color.

One possibility to make everyday life easier for colorblind people is an application they can download on their smartphone, which is easy to carry around and always ready to help. I researched applications for colorblind people and saw that they already exist. Glasses for colorblind people are an easy solution for traffic lights and other important signs on the roads.

The color detection example finds the color quickly and prints out its name, so the user gets fast visual feedback. Getting fast feedback is important in stressful situations like traffic lights and traffic overall, to avoid collisions.

There should be a limitation on the detection. The example detects all the colors written in the code, and detecting too many colors may lead to confusion for the user. Only the colors that are of interest to the user should be displayed. A solution to this problem could be that the camera only focuses on the object closest to it.

It is difficult for us to know how a colorblind person would experience the example because we don’t have the same problem with colors. For us, the experience is nothing special. It would be interesting to find a person with colorblindness and see how that person reacts to and experiences the example.

I think the material as a whole could be interesting for photographers or artists. It could detect the colors of a sunset, for example, making the camera trigger when it detects a specific color. The photographer would not have to take the picture manually and wait for the sunset. A comparison could be made with a speed camera which takes pictures of cars that do not keep to the speed limit.

Colour Detector (1).jpg

This image represents the framework applied to the color detection example. The system collects input from the live camera stream, which makes it background/proactive. The detection of colors, indicated by colored rectangles and labels, is foreground/proactive. The accessibility aspect provides access for users with colorblindness, which is foreground/reactive.

giphy (4).gif

The gif shows how the rectangles cover the areas where the color exists, and outside the rectangle the name of the color is displayed. In this video, the system focuses on purple and yellow.

color1.JPG

The picture shows the code tracking down the colors with “tracking.colortracker” and printing them out on the screen.
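A minimal sketch of this setup with tracking.js could look like the code below; the element ids and the drawing details are assumptions, and I use the library’s built-in magenta and yellow colors here rather than our exact color list.

```js
const canvas = document.querySelector("#overlay");   // hypothetical ids
const ctx = canvas.getContext("2d");

const tracker = new tracking.ColorTracker(["magenta", "yellow"]);

tracker.on("track", (event) => {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  event.data.forEach((rect) => {
    // Draw a rectangle over the detected area and print the color's name beside it.
    ctx.strokeStyle = rect.color;
    ctx.strokeRect(rect.x, rect.y, rect.width, rect.height);
    ctx.fillStyle = rect.color;
    ctx.fillText(rect.color, rect.x + rect.width + 5, rect.y + 11);
  });
});

// Track the live camera feed shown in the #video element.
tracking.track("#video", tracker, { camera: true });
```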

 

giphy (5).gif

Here is the example with a live stream video. It detects the color magenta because of the brightness in the environment; although it detects magenta, the cup is actually red.

The color detection example’s user presentation, system presentation, and override are described below.

User Presentation

The indicated colors are presented to the user through an object or a video.

System Presentation

The system shows the user a colored rectangle over the area that has the color. It also prints out the name of the color outside the rectangle, in the right corner. Therefore the user knows what color the objects in the video have. It only prints out some colors; otherwise, the video would be overflowed with rectangles.

Override

The user can stop further input by pausing the video.


 

The attributes fit into the example as follows.

Covered

The system detects the color of objects, which can be exciting for colorblind people because it helps them understand the color.

Mediated

The colors in the camera feed or video are displayed as rectangles. The user only needs to play the video or have objects in the live stream. Due to the brightness of the environment, the user might be uncertain of an object’s true color.

Uniform

The system will be triggered when it sees specific colors.


 

The problem with this example is the implicit interaction. Scanning is an explicit interaction where the user is certain about the scanning of the objects. The specific colors are also chosen by the user through the code. To make it less explicit, we could have the colors selected by the computer instead; for instance, the system could choose colors randomly from a list of colors. This would prevent the user from knowing exactly what is going on with the interaction and would make the example more implicit, giving the system the initiative to pick colors. We would definitely add this if we had more time. The system does collect input from the live stream and shows it in the foreground as rectangles, but the selection itself could be more proactive than reactive. Currently, the example is too explicit for the module.


Week 2: Module 1 – Part 1

Monday, 4th September

In the second week of the class, we began the first module, regarding implicit interaction and computer vision. In pairs, our task was to modify a few JavaScript demos given by the teacher and reflect on the implicit interaction of the experiments. We are not supposed to create concepts or ideas but to gain knowledge about the topic.

When I first heard about implicit interaction in the lecture, I realized that implicit interactions exist frequently in my daily life: for instance, walking through automatic doors in a store or at the university, opening my phone case so the phone displays the time, and the possibilities of IFTTT (If This Then That). IFTTT allows the user to trigger conditional statements of different services, called applets. I use some of these applets that classify as implicit interactions. I made an applet that turns the WiFi off when I leave my home and turns it back on when I return. Instead of manually interacting with the settings, I let IFTTT do it for me, reading my location as an input. Another example is that if someone calls me and I am not able to pick up the phone, the ringer volume increases to the highest level. This applet helps me find my phone or pick up the next call. It reads my missed call as an input and increases the volume without me having to interact with it. IFTTT gives me even more possibilities to create implicit interactions that are beneficial.

Thanks to our smartphones, computer vision exists in the shape of applications, for example Snapchat. In Snapchat, the camera detects the user’s face, to which the user can add amusing filters. Another product that has computer vision is Microsoft’s Kinect, which reads the user’s whole body and can be used as a controller in games. Computer vision does not only mean detection of humans but of non-living things as well. An example of this could be the rear sensor of a reversing car detecting the surroundings behind it.

We had to read a paper on the topic called Range: Exploring Implicit Interaction through Electronic Whiteboard Design by Wendy Ju, Brian A. Lee, and Scott R. Klemmer from 2008. In the paper, the authors present the implicit interaction framework, which is based on two axes: attentional demand on the user and initiative taken by the system.

interactivtiy.JPG

According to Ju et al. (2008, 2-3), foreground interactions need focus, concentration, and consciousness, while background interactions are the opposite and avoid those demands. Interactions that are initiated by the user are reactive, and interactions initiated by the system are proactive.

With this in mind, I want to get a better understanding of these terms, so I am going to apply them to my own examples from the paragraph above. The example with IFTTT and WiFi would be labeled as proactive/background. I should note that the user can decide whether a notification is displayed or not. In my case, it is proactive/background because I think it is annoying to have too many notifications. In the other case, with notifications, it is reactive/background because the notification indicates that the user should focus on the triggered event. The example about a sensor in the back of a car is foreground/reactive according to the axes. When the sensor detects the surroundings, the car gives audio and visual feedback to the user. Both kinds of feedback increase: the audio increases in volume, whereas the visual changes color and adds signals while approaching the closest detected object. When the car cannot reverse any further without getting damaged, the visual feedback turns red and highlights the last signal, while the audio reaches its maximum volume. The increasing feedback indicates that the system wants the user to focus more on the situation, with sound and visuals the user cannot possibly ignore, preventing the driver from damaging the car. All the interactions that put the car in motion are made by the user through steering wheel movements, pedals, and changing gears. Thus, those interactions are initiated by the user, and that part of the example is foreground/reactive. The sensor is always collecting information in the background without telling the driver until it is close to an object. It collects the data without the initiative of the user; the system does it instead, which makes it background/proactive.


Tuesday, 5th September

On Tuesday, we had a workshop regarding the materials that we will use for the examples. The demos were made in JavaScript and included face detection, facial expression detection, frame processing, object detection, and links to several other demos. I was surprised to see what JavaScript is actually capable of accomplishing.

After the lecture, my classmate and I tried some demos to understand the concept better. We thought these attributes were the most interesting to use for our experiments:

– Covered: magic, excitement, exploration, action-mode, witchcraft, deeply impress somebody

– Mediated: uncertainty, ambiguity, magic, handing over the responsibility (the interaction happens somewhere else), you don’t put much of yourself in it

– Uniform: influenced by intuition, control

Thursday, 7th September

On Thursday, we tried to come up with ideas and start coding. My classmate did not have any experience with JavaScript. However, the main priority is not the coding but the implicit interactions themselves. We thought there were possibilities with the emotion-tracking example.

We came up with the idea that facial expressions trigger a snapshot from the camera when the user shows an emotion. With the teacher’s help, we got the code to work, but when we showed an emotion, the camera kept on taking pictures. We tried to fix the problem so the camera only takes one picture at a time, but we did not succeed. The camera is only triggered if it detects happiness on the user’s face. We were not sure whether we would add the other emotions next week. The function of the example as it is right now could have a nice meaning behind it: the camera only accepts pictures of people smiling. The users will remember that they have to smile to trigger the camera, and the gallery of images will only include happy images. However, it does not mean that the user is actually happy while taking the pictures.

The facial detection example reminds me of the application Snapchat. Snapchat detects the user’s face and gives the user the option to switch between several entertaining filters. The user can add emojis and text to the taken picture and send it to a friend. But I have never seen an application that tries to detect one’s emotions based on facial expressions or takes a picture automatically as well.

I think it is really inspiring that one can make this with JavaScript. It feels encouraging to keep on learning and gain more advanced knowledge. The possibility to scan a human face overall feels like science fiction from a futuristic movie, but the technology has advanced a lot. I know that in video games they scan the faces of the actors so the characters in the games have the actors’ actual faces. Currently, the technology is not broadly available in products or for system security. Security-wise, fingerprint scanning is so far available on some smartphones, and face recognition to some extent. It also feels encouraging to just think about the potential of the technology in the future. For instance, facial expressions and scanning could be the new way to unlock a door or a car.

If we use facial detection for security, then the technology must work flawlessly to prevent burglars from getting access to the house. I think that systems or software requiring a lot of interaction would be difficult to control with a function like face detection, for example a game where the player moves and interacts with the character by using facial expressions. I think in the long term that would be exhausting and painful for the player.

The qualities of the example are that it is easy to use and the user does not have to interact with it physically to take a picture. It can also serve an entertainment purpose: it captures funny expressions in the moment, which the users can review and laugh at. A voting game could be a good use of the idea, where the players vote for which facial expression they think is the funniest. This creates good possibilities for social interaction. However, the interaction would then be more explicit than implicit because the user directly interacts with certainty. One big tension for this example is: how can the system know how the user truly feels? Facial expressions are easy to fake and do not really display feelings. The system could have some sort of artificial intelligence that learns by reading the user, as a starting point to increase accuracy.

I think most people have experienced the same features to some extent through applications like Snapchat, which I mentioned earlier. A similar system also exists in Facebook’s Messenger and Instagram. Generally, there is not much experience with it in other products currently.

When I mentioned facial expressions as a game, that could actually be a good way to expand the example towards entertainment. The game could have several rounds, and each round would have a specific emotion. Turn-based, the system could take a picture of the players expressing the emotion as well as they can. At the end of the round, they could vote for who had the best expression of the emotion. Another way to expand the example could be through social media. Instead of the user setting a timer to take a profile picture and so on, the camera could be triggered by smiling, for example. That way the user can be sure that the camera will take good pictures. There could even be more advanced settings for smiling, open eyes, mouth, and more expressions to make sure the user gets the needed profile picture.

As for facial detection for security, there should be boundaries, as mentioned earlier with the burglars. A photo should not be detected as a real person; otherwise it would be a tremendous advantage for thieves.

We noticed that the facial detection did not work well with my glasses. That would be a problem in an actual product because bad eyesight is very common. The application Snapchat, however, works very well with my glasses.

There was a problem with printing the images on the screen. The images were added beside each other in a row very quickly, a problem we did not have time to address. It would be easier for the user if it did not take so many pictures at such a fast rate.

giphy.gif

The GIF displays me trying the example as well as the code on the left side of the screen. When the camera detected that I looked angry, it took a stack of pictures.

 

code1.JPG

Here is a better image of the functions in the code which trigger the camera. Currently, only the happy and angry emotions are working. When the happy emotion is equal to 70 or greater, it triggers the camera to take a picture.
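To make the logic from the screenshot easier to follow, here is a sketch of the trigger in JavaScript. The emotion values are assumed to be percentages coming from the expression demo, the element ids are hypothetical, and the short cooldown at the end is just one possible way to limit the burst of pictures described above.

```js
const video = document.querySelector("#camera");     // hypothetical ids
const gallery = document.querySelector("#gallery");

let busy = false;

// Draw the current video frame onto a canvas and add it to the gallery.
function takeSnapshot() {
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d").drawImage(video, 0, 0);
  gallery.appendChild(canvas);
}

// Assumed to be called by the expression demo with values like { happy: 82, angry: 5 }.
function onEmotionUpdate(emotions) {
  if (busy) return;

  if (emotions.happy >= 70 || emotions.angry >= 70) {
    busy = true;
    takeSnapshot();
    // Wait a moment before allowing the next snapshot, so one expression
    // does not fire a whole burst of pictures.
    setTimeout(() => { busy = false; }, 2000);
  }
}
```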