The Wilson Project Blog of UX/Front-End Developer Ivan Wilson


Tag Archive / user interface

  • Gesture-Based CSS Selectors

    [This is a short, informal proposal of a concept, with a more extensive version to follow in the near future. However, this should be enough to start a discussion on its value/implementation.]

    What are Gesture-Based Selectors?

    Gesture-based CSS selectors extend the current set of CSS selectors/pseudo-selectors with a new set of gestures currently used on mobile (tap, tap hold, swipe, etc.). The main goal is to give elements basic interactivity without JavaScript support.

    The closest analog would be the current relationship between CSS3 and JavaScript animation (a quick sketch follows the list below). In the current methodology:

    • CSS3 – basic/simple animations, removing the need for JavaScript resources
    • JavaScript/JavaScript Library via Events – complex animations, requiring functionality beyond what CSS3 animations offer
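
    To make the analogy concrete, here is a minimal sketch of the CSS3 side of that split (the class name and values are my own illustrative placeholders): a simple fade on hover needs no JavaScript at all, while anything more elaborate still falls back to JS events.

      /* Simple animation handled entirely by CSS3 – no JavaScript needed */
      .panel {
        opacity: 0.5;
        transition: opacity 0.3s ease;
      }
      .panel:hover {
        opacity: 1;
      }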

    Initially, this was focused only on touch gestures. However, it can also be extended to other "gestures" such as the current JavaScript events "click", "mouse(over/out)", "key(up/down)", etc. This would allow for non-touch gestures → keyboard, voice/speech, inputs from any other devices, etc.

    CSS/Gesture Selectors Format

    The concept is to represent these in CSS code with the following selector formats (two versions, using the mobile gestures swipe left and tap hold as examples; a fuller hypothetical sketch follows the two formats):

    1. selector:gesture-(gesture name)

      Similar to the :hover and :focus pseudo-selectors (using the current pseudo-selector syntax)

      Examples:
      div:gesture-swipeleft { CSS code }
      a:gesture-taphold { CSS code }

    2. selector[gesture="gesture name"]

      Similar to attribute selectors, introduced in CSS 2.1 but used more widely with CSS3

      Examples:
      div[gesture="swipeleft"] { CSS code }
      a[gesture="taphold"] { CSS code }
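
    To be clear, neither format exists in any browser today – the selectors are the proposal itself. As a purely hypothetical sketch, with made-up declarations standing in for the "{ CSS code }" placeholders above:

      /* Hypothetical – gesture selectors as proposed, not current CSS */

      /* Format 1: pseudo-selector style */
      div.card:gesture-swipeleft {
        transform: translateX(-100%);          /* slide the card off to the left */
        transition: transform 0.25s ease-out;
      }
      a.menu-item:gesture-taphold {
        background-color: #ffd;                /* reveal a secondary state on tap hold */
      }

      /* Format 2: attribute-selector style */
      div.card[gesture="swipeleft"] {
        transform: translateX(-100%);
      }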

    Constraints/Problems

    At this point, three initial problems appear:

    1. New Devices/Platforms w/ Gestures – some sort of open, flexible path/procedure is needed for the recognition of new gestures. I recommend the following as an initial path:

      Device/Platform Support → JS event support → CSS/gesture selector available

      This would make the selectors mentioned above available. However, there is the problem of device-specific gestures, which could mean that certain selectors will be available only on those devices. Currently, touch events are only available on touch-enabled devices. If a [front-end] developer were writing code for a non-touch-enabled device, the JS support would be non-existent → no CSS/gesture support (see the sketch after this list).

    2. Gesture Uniformity – if a gesture gains support across platforms/devices, the related CSS selector should never be prefixed the way current CSS3 features like animations or transforms are (i.e. "-iphone", "-android", etc.). I recommend this because, unlike CSS3 features, these are OS-based ("native") issues. The selectors would be device/platform "neutral" – let platforms/devices be solely responsible for interpretation.
    3. Computer Processing – if this gets implemented, how does it affect processor/power usage? How does the current set of CSS3 features (transitions/animations) deal with differing processing power (hardware acceleration)?

      If this gets added to the current set, will there be a bigger demand for hardware processing? What would this mean for small, less powerful devices (the very constraint driving iPhone design and construction)?

      Overall – this is an issue that will need to be taken up post-proposal, with people more knowledgeable than I am.
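
    One practical note tied to problem 1: in CSS as it stands, a rule whose selector a browser does not recognize is dropped entirely, so baseline styles and gesture styles would need to live in separate rules for graceful degradation. A small hypothetical sketch (again, the gesture selector is only proposed):

      /* Baseline styles – applied everywhere, including non-touch devices */
      .card {
        transform: none;
      }

      /* Kept as a separate rule: a device/browser without swipe support would
         not recognize the selector and would simply ignore this whole rule */
      .card:gesture-swipeleft {
        transform: translateX(-100%);
      }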

    In summary:

    1. Creation of CSS/gesture selectors to allow for basic interactivity without the need for JS events
    2. Providing a “path” to allow for more gestures – not just current ones but those on future devices/platforms – while staying platform/device independent

    Flickr – Scans of hand-written notes from Polaris notebook

  • A Brief Musing on the Supposed Separation Between Mobile and Desktop

    A couple of months ago, I was watching a promotional video touting some new technology, something written solely for mobile. They went on about their process: by not focusing on the desktop, they were saving file size and increasing performance, which is all well and fine. Anything to make life better, especially on those days when I want information without waiting for everything to compose itself during the morning rush hour. But at the end of it, I wanted to ask this question (which, in hindsight, I should have added to the comments):

    Why should there be any difference between the desktop and mobile?

    Before answering, think about it real hard.

    Don’t worry. I’ll wait…

    The current paradigm of mobile is based on two things: the mobile phone and the tablet. But isn’t this the same sort of thinking we had before – the PC as the desktop? Didn’t we get over this? I got over it years ago, especially when my previous job required me to work with both Windows and Linux.

    What I am thinking is that the current paradigm is just as short-sighted.

    Let me put it this way: in a year or less, why not see a mobile device become the desktop?

    Why not give the "desktop" touch-enabled events like its mobile cousins?

    What I am imagining is the mobile/desktop schism not just disappearing. It simply gets redefined.

    If my life is almost entirely located within the confines of my mobile phone, why not go all the way?

    Our perception of the desktop is that of a monitor tethered to an external hard drive, et al. What about a rapidly approaching near future where our version of the "desktop" is a mobile device tethered to cloud storage?

    Just a thought?

    Don’t wait too long.

  • What’s in a Title (of a Block of Related Blog Entries and Other Assorted Writings)

    After a few months of delays, I am finally starting. But even figuring out the title seems to be a bit of a fight. Well, the title will be, simply enough, The Information Layer.

    However, there is the “formal” name that gives more of an explanation of what this group of writings will be about, based on the following:

    The Information Layer
    A group of writings/blog entries based on the observations of the mobile/web environment
    by Ivan Wilson

    Beyond that, still working on the initial step.

    Of course, I just realized that there are two initial steps and I have to figure out a way to work this out.

    Pain.

  • (How) Geocaching Taught Me To Visualize Information

    (Originally published on CDG Interactive/Innate blog)

    Earlier this year I attended the IxDA Interaction 11 conference in Boulder, Colorado, one of the most important conferences in the field of interaction design. At this conference, I’m both an observer of the field and a student, gleaning ideas I can use to improve my own skills.

    In my three years of attendance, I’ve learned that interaction design is not just about building products; it’s also about how to visualize information. One of my jobs is taking HTML/CSS code and making the content not only visible but easily accessible to the user. At a time when massive amounts of data are freely available, finding ways to make information not only understandable but also easier to use and manage has become its own field of study.

    Making Information Visual

    So what does “visualizing information” mean? For example, take a basic weather map we see on the local evening news. Apart from the physical location and geographical borders, information about air pressure and temperature is shown in graphics instead of a straight list of numbers and statistics. This same information is given some sort of context (work, travel, agriculture) for viewing. From this, we make judgments about what activities we want to do during a certain time period (going to work, growing crops, flying on vacation).

    We can use maps to overlay any information we want, depending on what we need or want to do. But one use of a map is something that we have been doing since childhood: using a map with information to find something.

    And that’s just what I did at IxDA in Boulder. Each year’s conference has a different twist, depending on the location. Home to the University of Colorado, Celestial Seasonings and a growing tech/design scene, Boulder is nestled near the Rockies and has a reputation for being eco-friendly. Well, when looking at the list of Friday activities, I found one that caught my attention: Geocaching: Treasure Hunt.

    The “Treasure Hunt”

    Geocaching is a hide-and-seek game where finders use GPS units or GPS-enabled mobile phones to find caches. These caches are containers that have various items inside. They can be as large as a small Tupperware case, or they can be micro caches, which can be as small as a 35mm film canister. Either way, they are located in publicly accessible areas and are hidden from view. However, the owner of a cache will leave some clues (text, title name, or images) for the finder to locate it. Of course, having a GPS unit does not mean it will be easy to find. If you have ever used one, you know that there is a certain range of accuracy depending on the signal and location.

    Occasionally, these caches will have a travel bug, a type of trackable tag with a unique ID that the finder/owner can move to any location and place inside another cache. Once the cache has been found, if the finder takes something from it, she/he must leave something of equal or higher value in its place. The finder marks that she/he found the cache via paper/app/website and replaces the cache in the exact spot for others to find.

    Hmmm. Hide and seek with GPS tools. Exercise with actual pay off. Sign me up!

    Following lunch on Friday, the group got together. I was matched up with Jill, one of the advisors, because we were the only two in the group who had Android phones. (BTW…interaction designers luv iPhones. Just an observation…) I downloaded an app beforehand (there are a number of apps for iPhone and Android) and played around with it prior to the trip. It also helps if you have the latest upgrade of Google Maps on your phone with navigation abilities.

    We traveled to an area in Boulder where we separated into three groups. The group I was in located a micro cache about 0.7 miles north and walked down some streets to the spot, which was in a parking lot (caches need to be in a public place). The GPS trackers got us to within a couple of feet. However, it had snowed a few days prior and the parking lot was covered in snow mounds a few feet thick. I tried digging with gloves to see if I could find it, but to no avail. Everyone searched around the area, but no luck. Considering the area was still snowbound and we were looking for a small micro cache, we decided to search for another one.

    Looking at the geocaching app, we found another micro cache only a few meters to the north. Unfortunately, we came up empty again – a snowbound area with little clue to finding it.

    We decided to move on to another one, this time a regular-sized cache about 0.3 mi west of our location. After crossing streets and more parking lots, we finally got to the location. Many of us recognized it because of the clue given in the title. Like I mentioned before, the GPS only gets you in close proximity to the location; you still have to figure out where it actually is. Another clue pointed to its actual hiding spot. That’s easy enough.

    Well, almost. Now, you have to try to get it out. Remember, these caches were not meant to be easy to find in the first place…

    Finding the Prize

    The tallest of the group reached in and pulled out the prize – one round metal candy tin. After a few hours of searching and finally succeeding with our third choice, needless to say we were all ecstatic (judging from the group picture afterwards). Then we opened our prize.

    Caches can contain any sort of object left by previous finders or the owner. (There are rules about what can or cannot be put there.) This one had a simple log book, with a list of names and dates recording when the cache was discovered. One of the more interesting items was a small tale/story on two small pieces of paper, folded inside the cache. It also had a squashed penny. This cache had a travel bug, which was taken by our advisor to be placed in another cache. As is the custom, we had to put something back into the cache. A few items were put in by others, but then someone shouted out “Does anyone have a coin?” I had some Canadian coins on me and threw a “toonie”, a 2-dollar Canadian coin, into the cache.

    [By mentioning this, everyone here in the Innate office will automatically reply “Of course, he has one!” Needless to say, I take a lot of trips to Canada.]

    Well, after the celebration and the signing of the small log book, everything was sealed up and the cache was placed back in the same location as before. The advisor marked the find on the site, along with the items that we placed in the cache.

    Mission completed. Smiling but tired, we walked back to the bus and then back to the hotel. For me, the day was complete.

    Now, when I work with information and its display, I’ll remember the connections between data and discovery that are made when geocaching.

    P.S. I mentioned the travel bug earlier. Beforehand, it was revealed that one of the goals, besides finding the caches, was to have at least one of the travel bugs moved bit-by-bit to the location of the next conference – Interaction 12 in Dublin, Ireland in February 2012. If this one makes it, it will no doubt be very well received.

    For more information

    Your turn:

    • Have you ever been geocaching?
    • What are your favorite examples of interaction design?
    • How about visualization of information?
  • First Preview of The Information Layer

    First notes from my Moleskine notebook (Project Charles – Volume 1), 5/10/2009 entry

    Notebook (Volume #1 - Project Charles) - First Set

    Notebook (Volume #1 - Project Charles) - Second Set

    First sketch of new information model (same date)

    Sketch (5/10/2009) - New Information Model

    New information model (current view, comparing with previous model) – Basis for “The Information Layer” essay

    New Information Model - Current Realisation

  • London and Back


    Just re-entered the country after a week in London for the last FOWA (Future of Web Apps) conference of 2010. It will take a while to settle down (not to mention get back to a regular sleep schedule). I am spending a week going through all the notes and ideas from the conference; all this information will be slowly integrated over the remainder of this year. I was also able to get in some sightseeing, as well as a trip to The National Theatre (Hamlet) and a concert (BBC Symphony @ Barbican).

    Now, going home to DC to a stack full of work at CDG.

    Later.