From reading the week 1 articles, it is clear to me that I did not understand the cognitive psychology aspect of HCI. There is so much more to product design than I ever thought possible. One aspect I noted was the article discussing the impact that accidents and mishaps have on companies. I did not foresee such a large monetary impact on a company unrelated to lawsuits, but after reading it, it makes sense that things would need to change drastically to prevent such issues from happening again. I never imagined those changes costing a company so much money. Through this, I have gained a new appreciation for the necessity of human factors review and for trying to prevent operator error. I will also be able to better appreciate these things in my own field of biology.
I was personally hoping that the articles would give more examples of the cognitive aspects of HCI. I did appreciate the review of the types of memory, and I learned more about the rehearsal loop in detail. However, some of the material the articles covered seemed to border on common sense, at least for me. I would love more concrete examples of HCI at work; for example: this group of people made a website that was difficult to navigate, so this is how they fixed it after observing others use it.
RE: Week 1 Reading Reaction
Based on your reading, especially the rehearsal loop, what about the articles was obvious to you? Did anything pop out to you as applicable to your area of interest or research? I, too, would love more concrete or applied examples, but perhaps these threads could help start conversations about how other people in class have applied, or will apply, these theories, methodologies, and examples to their own projects.
I would like to start by providing an example of how I usually describe a simplified version of the UX process in relation to software usability testing. Imagine you are the UX designer/researcher and the user interface designer has already built a web form. The form consists of a simple page with a few fields, a submit button, and a success page. Users in the test group are asked to complete the form, and you, the UX researcher, compare their progress against a baseline you set by completing the form yourself. The fields might include things like name, email address, and phone number. Say it takes you approximately 2 minutes to complete the form; since you helped create it, you allow a 3-minute leeway because of your experience with it. You want everyone to reach the success page in under five minutes, without errors. A number of things might happen with this form: perhaps users take longer than expected, maybe they hit errors, or maybe they never complete the form at all. Your job is to use analytics and other available tools to determine why things are not happening as expected, to improve the process, and to make the form as efficient to complete as possible. A few obvious things may come to mind as you review the form. Maybe it needs labels or browser testing, or perhaps there are field requirements that should be described under the labels or in a tooltip. For example, you might not allow certain types of email addresses, or the phone number field might require dashes. This may be the point to discuss with the development team whether dashes are necessary, since they could be added automatically on the back end. You may also consider in-line validation, but for accessibility reasons you would also need a backup method for delivering error messages.
I prefer a bulleted list at the top of the page that links to each error. The goal is not only to surface the errors but to prevent bottlenecks from ever occurring. Now, imagine all of those things are fixed but a number of people are still not completing the form, or are taking longer than usual. This could be due to external factors such as a distraction at home or work. In that case, you may want to check whether they went to the site's help documentation or tried searching for another way to finish the form. You could use a heat map to track where the mouse goes, look at behavioral flows, or review their on-site search queries. Regardless, on public sites you should assume that at least a few people will be unfamiliar with the application or with technology in general, or may simply have computer issues or other external factors at play. If it is a corporate application, some of those factors can be ruled out. In that case, it may be worth doing observational testing, where you assign tasks to users and watch them complete the form in person to determine what additional issues are impeding your success criteria.
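To make the validation step concrete, here is a minimal sketch of how the field rules described above might be expressed in code. The field names, the email pattern, and the dash-normalization rule are all assumptions for illustration, not the actual form; the point is that one list of errors can drive both in-line messages and a bulleted summary at the top of the page.

```javascript
// Hypothetical validation for the example form (name, email, phone).
// Returns a list of { field, message } objects; an empty list means success.
function validateForm(fields) {
  const errors = [];

  if (!fields.name || fields.name.trim() === "") {
    errors.push({ field: "name", message: "Name is required." });
  }

  // Simple email shape check; real acceptance rules may be stricter.
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(fields.email || "")) {
    errors.push({ field: "email", message: "Enter a valid email address." });
  }

  // Accept digits with or without dashes, since dashes can be
  // added automatically on the back end, as discussed above.
  const digits = (fields.phone || "").replace(/-/g, "");
  if (!/^\d{10}$/.test(digits)) {
    errors.push({ field: "phone", message: "Enter a 10-digit phone number." });
  }

  return errors;
}
```

The same `errors` array could be rendered twice: once next to each field for in-line validation, and once as the bulleted summary list at the top of the page, with each bullet linking to its field (for example, via an anchor to the field's `id`), which gives the accessible backup delivery method mentioned above.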