I summed up my previous couple of weeks’ progress in the form of a presentation. Some feedback I received includes:
- Be careful about turning this project into a cold, sterile, pragmatic, primarily geeky endeavor. Remember that these are people I am designing for. (One of my solutions to this is to make conveying emotion and inflection a priority in the UI.)
- Maybe instead of the phone/tablet-based solution seen in the sketches below, I should pursue a different type of technology, such as an ASL-detecting peripheral (hello, MX!). However, the biggest problem there would be the same issue as with plain texting: aspects of natural language, such as emotion and inflection, are lost, not in translation, but in the capturing of the content. So maybe I find a way to capture facial expression too?
- Show emotion, emphasis, and subtext formally, through various “designery” things, such as expressive typography, color/gradients in the messages themselves, or icons/emoticons.
- Sketch some possible visual UX/UI directions for the app as a whole.
- Develop scenarios for each aspect of the app.
- In-depth sketches/wireframes of each scenario (a few directions).
And here’s an updated schedule/timeline for this project: