Feedback/Ideas for the Audible Applications

Recommendations for Audible

I wish I could provide feedback directly from the iPhone or Android app. I also wish that if the application crashed, it would save which items had been read or marked as finished.

In the list of books, it would be nice to see each book's star rating and to be able to sort by rating (IN APP).

I would like to be able to log in to the app with biometrics (thumbprint). The Android app integrates the store better than the iPhone app does.

When rating books, too many clicks are required. It seems like the star ratings could all be on the initial screen (with the list of recommended books) rather than clicking overall stars, being redirected, then submitting and going back. When doing this for multiple books, it becomes very tedious.

It would be nice to integrate the Listener page better with other book lovers' social media like Shelfari and GoodReads, or even blogs, so that feedback can be auto-posted to Amazon, Shelfari, GoodReads, and blogs.

Why haven’t improvements been made to the gamification of the Audible application? I earned most of the badges a few years ago, but no new badges have appeared since, nor rewards for earning them, nor any ability to compete with friends. Most of the gamification seems to revolve around the antiquated “share” feature, which in its default form is annoying and spammy. I read a lot of books, and if I shared every book, badge, and other event from the application with the default verbiage, it would annoy my friends. There seems to be little motivation in this, but if people could earn points or rewards for commenting, inviting friends, sharing books, and writing reviews, more people might get involved. The app could track the number of people invited, make a competition out of books read, or offer 1 free credit for every 100 books.

The count of titles in my library always seems to be off. It would also be awesome if returned books showed up somewhere. I’m not sure I would accidentally repurchase a book I didn’t like, but at least I could see the books I rated poorly and the “similar” books recommended.

via Blogger

Consider Good Human Factors

“If consideration for good human factors is given early in the design process, considerable savings in both money and possibly human suffering can be achieved.” — Wickens, Lee, Liu & Gordon-Becker: Chapter 1 of “Introduction to Human Factors Engineering.”
“The system must support the user’s task: if the system forces the user to adopt an unacceptable mode of work then it is not usable.” — Dix, Finlay, Abowd & Beale (2003): Introduction to “Human-Computer Interaction.”
“In other words, companies would be wise to invest early and often in a well-thought-out user experience for their products and services. Too many companies believe that their brand recognition will suffice when launching and maintaining products. This is just not the case anymore in our day and age. A successful company will invest time into testing the usability of their products to ensure they aren’t wasting time and money focusing on the wrong areas.” — Ryan Stone


Good vs. Bad Product Design

Charles Di Renzo
Week 2 Reading Reactions
I liked the Wickens reading the most out of these three. It covered the different types of experiments and the conditions under which to use them. This was fascinating to me, as I love the science involved with HCI. I spoke about it in my last post, but I’m always thinking about whether a product can be ‘objectively good’ and, if so, whether that means most products are headed down a road of imitating one another in the hopes of being the best product on the market. I think we all feel that products can be ‘objectively bad’ because there are products that are either hard to use or don’t accomplish their purpose very well, but can a product be deemed ‘objectively good’ if there are people who would prefer a different interface? Either way, I enjoy hearing about the science and research that goes into everything. I was also glad to see the discussion of p- and t-values and their importance in research; seeing how studies were conducted was a really interesting part of my economics major. I hope that my previous experience with statistics will be of use during this graduate program.

I also enjoyed the Dix Ch. 9 article and its discussion of heuristic evaluation. We touched on these in my undergraduate ‘Usability’ class, and I enjoy using an app or a webpage and thinking about how they’ve been applied.


I like that you pointed out Wickens’s experiment types and the objectively good/bad products. I didn’t consider discussing either of these in my response to this week’s reading, but the way you responded reminds me a lot of other readings and lectures I’ve attended on how products are invented or revolutionized. Too often, it seems, products are given “extra features” for the sake of doing something meaninglessly different; these distract from the product, and sometimes they are done poorly.

I own the Samsung S2 smartwatch. I like it a lot. It functions well as a phone and a watch, but beyond that it holds little value for me. It COULD do so much more (I’m trying to learn to build my own apps for it). I was also extremely disappointed that the SmartThings app does not yet support the S2, despite being deceptively advertised as compatible and despite SmartThings also being owned by Samsung.

Mechanical keyboards now come with distracting but very cool lights. They make a great, expensive techy purchase, but I also wonder at the necessity (though maybe I’m a hypocrite, as I type on a mechanical keyboard at work... I don’t think it’s really that much better than my built-in laptop keyboard).

Shoes can track your steps but do little more than that. The sensor is inconveniently located on the bottom of your dirty shoe, and it is dependent on a phone.

Smart home devices are extremely expensive for what they are and how poorly they are built. Many of the devices are built for battery installation so you do not have to wire them into a wall, but there is rarely an option to power them without batteries. Why don’t they come with rechargeable batteries or solar panels?

All too often, it seems, things are designed because they are cool new ideas, without trying to solve problems or asking: is there anything wrong with the current design? How could this be better? Or the designers do ask the questions but do not try to encourage REAL conversation or negative feedback. Growth in HCI most often comes from negative feedback, not feel-good answers.

I really like the idea of doing a pilot study before the real study. It makes a lot of sense as a test run, though ideally in software development I like doing multiple evaluations and tests. The process becomes very cyclical: build, test, revise, repeat. The problem with this is that the software is only “new” once. Finding lots of users who are unfamiliar with the system can be hard for me, since I work in an industry where everything is secure or confidential.

Something else I wondered about the multitasking study was whether, even with high familiarity with the types of tasks, the task itself might have been hard for the participants. In that case, does the result really reflect their multitasking abilities? I would probably consider myself an HMM, or heavy media multitasker, as opposed to a light media multitasker (LMM). I wonder if the results or takeaways from the study could be used to identify self-deficiencies in this area for improvement, and what multiple follow-ups to the test would produce.

I doubt that picking out red or blue rectangles would be hard, but you never know. Perhaps the task was distracting in itself because it was boring? I wonder if motivational factors should also be considered. Since HMMs and LMMs are self-identified, wouldn’t it be best to have a survey or tasks that also confirm their assumptions? For example: What makes you a multitasker? Do you frequently have multiple tabs open at the same time? Does it bother you to have music or a TV on while you are doing something else, like work or homework? Do you perform well in high-stress or high-anxiety situations? Do you like to work ahead? Maybe also define better for them what multitasking means?

Lab tests are never going to be just like real life. They take place in unfamiliar spaces, and participants are aware they are being watched in a way that is hard to forget. So another question I have is how different this would be if it were made into some kind of similar test or even a video game. That way it could be run on a larger audience without the nuances of lab testing. Wickens (2008) does discuss that there are varying testing methods that involve less controlled, more realistic observation. I would like to see how some of these tests could be conducted using mixed methods: how they are set up, how the analysis and synthesis are done, and to learn more about the backgrounds of the individual researchers.


Connecting humans to computers

Lydia Hardie 
Week 1 Definitions
As someone coming in with more of a psychology background than anything, I found room for elaboration in the definition of a computer provided by Dix.  It states “By computer we mean any technology ranging from the general desktop computer to a large-scale computer system, a process control system or an embedded system.” 
I think much of the lack of clarity comes from the use of the word “computer” in its own definition. I was largely looking to this reading as a guide to what qualifies as human-computer interaction, especially in light of brainstorming topics for the research project. The article’s opening example, in which the delete button is pressed instead of the save button, seems quite analogous to poor button layout on a simple calculator, but I do not think a simple calculator qualifies as a computer. Likewise, the automatic syringe example seems very similar, but again this is not a computer as I know it. Furthermore, the example research project regarding prosthetics also does not appear to include a computer, or at least one that meets my current understanding.
On the other hand, I find that Reisberg simply and adequately defines cognitive psychology “as the scientific study of knowledge” and sufficiently elaborates on what this encompasses. I do realize that, with my background in psychology, my requirements for a definition of cognitive psychology are lower, though. Lastly, in a bit of irony, I was amused that Reisberg’s use of the computer as a metaphor for psychological explanation perhaps provided the best attempt at a definition of a computer, as information storage and retrieval were emphasized.
I agree with your summation of the lack of computer-based examples in the reading, at least computers in a software-development sense: ones that include a monitor, hard drive, keyboard, and mouse. Heck, even more tablet or phone interface examples would have been nice. I did enjoy the readings as starters for understanding psychology, how humans interact with machines, and the underlying basic principles of psychology. I guess I’ve never really thought of cognitive psychology as “the scientific study of knowledge,” but I’m sure that is true. I do not have an extensive psychology background. I know it relates to perception, learning, experience, and obviously knowledge, but for some reason I have always considered it more like a path within the mind and how things are processed: like a railway connecting various stations, the rail line being just as important as the cargo, destination, or train. Maybe my love for hyperbole and parables is why I enjoyed the reading from Daniel Reisberg so much. He used analogies to describe abstract ideas and draw conclusions while posing additional questions, like why do we do what we do, and how?


Considerations with Multitasking

Katherine Anthony
Reading Reaction, Week 2 – through the eyes of a journalist
I found the “Cognitive control in media multitaskers” reading extremely interesting, especially because my area of emphasis is journalism. We have frequent discussions about the evolution of multitasking in media and its impact on attention span. From a content perspective, we write to appeal to an audience that has a short attention span and is likely preoccupied, just wanting the down-and-dirty facts. That’s where the inverted pyramid comes into play: the most important details at the top, with information becoming less important further down the piece.

That being said, the results are quite startling to me. While I think it makes sense to say that HMMs struggle to filter out irrelevance, I don’t know that the distractions they encounter leave LMMs unaffected altogether. It certainly shows that distraction comes easily for HMMs, but depending on what that distraction is, LMMs could be just as susceptible. In my opinion, given the way that media is approached, at least “text” media (both print and online), the audience being catered to is actually the LMMs and not the HMMs, since it is the LMMs who have the capacity for top-down attention.

With the evolution and availability of different media platforms (cell phones, tablets, PCs, laptops, smart TVs) since 2009, I wonder if these results would be the same today. The reason I’m curious is that, especially during popular sports seasons, PIP (picture-in-picture) is heavily used to watch multiple games at the same time. Yet you don’t often hear or see that the person consuming all of that media cannot differentiate between what is and is not relevant, i.e., which game had a significant moment over the others.

In support of the situation I just mentioned, The Verge recently did a follow-up on NBC’s Olympic coverage (or lack of quality coverage) and criticized the lack of PIP options for watching multiple events at once. That article can be found here:

Do you think that enough has changed in the last 7-ish years to repeat this study and compare for the change (if there is any and exploring why/why not)?

John Culliton
RE: Reading Reaction, Week 2 – through the eyes of a journalist
Hi Katie, great article. I had never really considered the PiP option as a parallel to the study until you mentioned it, but it makes sense. As you say, most people wouldn’t have a major issue remembering which event happened in which game. I wonder if the ability to utilize multiple media platforms without distraction could be affected by “what” those platforms are providing. For instance, in PiP you may be watching multiple sporting events, which relate quite well to one another. However, if the different platforms are providing a spread of unrelated information, such as Facebook photos, CNN news articles, and ESPN on in the background, would the effects be the same? Thanks for the response. Jack-

Shelby Gosa
RE: Reading Reaction, Week 2 – through the eyes of a journalist
To expand on what you suggested, John, I also wonder if the type of media has an effect.

For instance, radio is audio only. Information from places like Twitter and Facebook is text only, the nightly news can be both audio and visual, etc. Some people are perfectly comfortable listening to the news on the radio while also reading articles online. Is there a difference between audio and visual media? It would of course be difficult to listen to multiple audio streams at once, but I would be interested in finding out what is more or less distracting: multiple media feeds with or without audio? Can you balance audio-plus-visual feeds the same way you balance multiple visual feeds? Is the level of distraction different, or are audio feeds more or less equal to visual feeds?

I feel as though the audio may have a significant effect, though one would have to take into consideration the tendency of the users to “tune out” things that they don’t want to hear.

Desarae Veit
RE: Reading Reaction, Week 2 – through the eyes of a journalist

I like that you brought up the correlation between types of media and whether they are distracting. I’m sure it depends very much on the individual, but I would like to use my coworker and myself as examples.

I have been doing UI/UX for over a decade and often like to listen to audiobooks while I work. I retain the information fine and even write book reviews on a blog in my free time. I can only listen to the books while I code, though. It’s a lot like tunnel vision: if I’m in the groove, I can complete a whole website, after the strategy is in place, without thinking too much about it. I cannot do this while doing other tasks, like writing a paper.

My coworker likes to “multitask” with tabs but could never handle additional auditory distractions.

Katherine Anthony
RE: Reading Reaction, Week 2 – through the eyes of a journalist

That’s a really interesting question! Perhaps it’s an unexplored element of HMMs/LMMs to assume that what they’re watching would be related (definitely what I did!). I don’t know that I’ve ever considered using PiP for multiple focuses; typically it’s all sports or all news. I hadn’t even thought to stream news while watching sports or vice versa. But, to argue with myself, would using PiP be considered low or high multitasking? Would someone be considered an LMM if they’re watching all one “subject” and an HMM if they’re watching different “subjects”?

PiP isn’t extremely new, though it’s rarely discussed unless it’s in criticism of something or someone. So while I initially thought the study we read could be viewed as outdated, I think it’s safe to say that it absolutely is, because of all of these new availabilities. Absolutely fascinating to think about. Thanks for bringing that up!

Desarae Veit
RE: Reading Reaction, Week 2 – through the eyes of a journalist

I did not consider the age of this study. Bravo to you for pointing that out. Considering smartphones and advancements in gaming, computers, and television, I would absolutely say this study could use an update. Who knows, maybe the data would be the same, but I see a few factors in the original study that also gave me pause about the data’s accuracy (discussed in my response). When I was growing up, the internet was dial-up (remember the AOL discs?). It took minutes, sometimes well over ten, for one page to load. Now, we get mad or leave if a page takes more than a few seconds to load (I read a study once that said 2-3 seconds). The same study also found that most users give a page only 2-3 seconds to prove relevant before deciding to leave or engage with the site further. My experience as a designer and someone who loves reviewing website analytics correlates with that, since my bounce rates tend to be high and fast on landing pages, or almost nonexistent.
