Tag Archive / user interface

  • Revisiting The Art of Responsive Design

    The Art of Responsive Design (Innate Blog)

    About a year ago, I wrote a blog post about responsive design. But instead of the usual techniques, I decided to describe it with three terms – Constraints, Content, and Context.

    A year later, these three terms are more relevant than ever, especially Context. I am thinking about re-editing the post for brevity but the main points will remain.

    Update (12/9/2016): Innate republished the blog post today (thanks!) and it will be the source of a new lecture sometime in mid/late 2017. *fingers crossed*

    The Art of Responsive Design (2017) - Notebook

  • The End of Project Ottawa

    Almost three months ago, before I attended the IxDA Interaction 13 conference, I decided to put the project on hiatus for two months.

    Well, that time has passed and I have come to the decision that this hiatus will be indefinite.

    At this point, I am leaving this somewhat open-ended because I do not know when I will come back to this, if at all. Some of the ideas here will return in different situations down the road. But as of now, no further work will be done and there will be no third draft. The previous version will remain online (see Second Draft), but no further revisions will be made in the near future.

    I want to thank everyone who helped me along the way, especially during the craziness of last year. Most importantly, I want to thank all those who let me bend their ears [constantly] about my ideas and gave me some much-needed advice. Right now, Ottawa is at a point where I cannot devote any more time to it, and there are other projects that need my attention. In some respects, Ottawa may have been a solution in need of a problem, and I suspect it may be a couple of years before it is fully understood.

    Thank you all,
    Ivan Wilson

  • Happy Anniversary, Project Ottawa

    Last year, before going to Jonathan Snook's inaugural SMACSS workshop in Ottawa, Canada, I was thinking about something that had been on my mind for the past year. Then, [place lightning description here], I found inspiration while looking at some of my old linear algebra books from college.

    What did I do next? I announced it on Twitter.

    I spent whatever free time I had, post-workshop, working on this in my [first] Moleskine notebook. The early sketches look more like algebra proofs than the visual model that exists today.


    However, these sketches and some rules that I wrote down became the basis of the project’s First Draft.

    Preview - New Project

    And, as they say, the rest is history.

    Though the project is currently on hiatus, I am planning a few more sketches and notes this year.

    Hopefully, this project will still be around for year two.

    Happy Anniversary, Project Ottawa!

  • Building the Future, Day 1 – The Beginning

    Like all things, every story has a beginning. In this case, [Project] Ottawa started with the concept The Information Layer (2009). But what came before this?

    Well, it all started in Vancouver, Canada (February 2009), where I saw this film in a lecture by BERG designer Timo Arnall:

    Wireless in the World 2 – http://vimeo.com/12187317

    In Wireless in the World, they imagine the wireless networks present in the surrounding environment. On its own, it is an interesting film, but for me it was an eye-opening experience. You see, up until then, I had only viewed the Web as fixed in place: something accessible from the comfort of a chair and a desktop computer.

    Step back for a moment and imagine each of those dotted circles representing an access point as just another desktop computer with a chair. It looks funny at first, but the main point is that each of those access points is accessing data: the same content I am accessing through my desktop computer. If your concept of content is something seen through a desktop monitor, what does this do to it? Suddenly, the same content is available across all sorts of devices, at will, without the constraint of the standard web page format, and even without the author controlling how the information is displayed. The user now has the power not only to access the information but to display it in any fashion he/she wants.

    That idea of information being free, not in the political sense but in terms of accessibility, really changed how I worked. After that film and the lecture, I decided that my job as a front-end developer was not creating layouts. My job became building products that allow easy access to information. Building the layout with excellent code was simply a means to an end, and improving the work simply meant improving access to information. Information, in terms of my work, is equivalent to content.

    At this point, I was trying to find a way to explain this way of thinking. A few months later, I was looking at XSLT, or XML transforms: a method of taking data in an XML format and transforming it into something resembling an HTML web page. XML is an open format; anyone can take the information and display it in any form they want. RSS feeds, for example, are XML data streams that users can collect and reuse. This is where all the dots began to connect. A format like XML or JSON can carry content/information anywhere, with the user applying the formatting.

    Going back to this point, I wrote down some ideas and sketches which later became The Information Layer. What I realized was that the current UI model was not sufficient: it was simply not granular enough to fully describe what was happening at the time. One of the novel things I did was separate the Semantic (HTML) Layer from the Information (content) Layer. How important was this? Very, because it depicted the free flow of information/content. It also made clear that HTML has its own sense of meaning, something further expanded by the HTML5 semantic tags a few years later. This was not a new concept, but it was not fully realized until now.

    And so, that was the beginning. From here, I used this model for building my work.

    As I mentioned in an earlier blog entry, Project Ottawa is simply the first practical application of the model. It was revised recently to deal with the concept of content, which will be the main focal point of Project Ottawa/Third Draft.

  • Gesture-Based CSS Selectors

    [This is a short, informal proposal of a concept; a more extensive version will follow in the near future. However, this should be enough to start a discussion on its value/implementation.]

    What are Gesture-Based Selectors?

    Gesture-based CSS selectors extend the current set of CSS selectors/pseudo-selectors to include the set of gestures currently used on mobile devices (tap, tap hold, swipe, etc.). The main goal is to give elements basic interactivity without JavaScript support.

    The closest analog is the relationship between CSS3 and JavaScript animation. In the current methodology:

    • CSS3 – basic/simple animations, removing the need for JavaScript resources
    • JavaScript/JavaScript library via events – complex animations, requiring functionality beyond CSS3 animations
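
    To make the analogy concrete, here is a minimal sketch. The first rule uses the real :hover pseudo-selector to add interactivity without JavaScript today; the second applies the same principle through the proposed gesture selector. Aside from the hypothetical selector itself, the class name and declarations are illustrative assumptions, not part of the proposal.

      /* Today: a pseudo-selector gives basic interactivity, no JavaScript needed. */
      a.card:hover {
        opacity: 0.6;
        transition: opacity 0.3s ease-in;
      }

      /* Proposed: the same no-JavaScript principle extended to a touch gesture
         via the hypothetical :gesture-taphold selector. */
      a.card:gesture-taphold {
        opacity: 0.6;
        transition: opacity 0.3s ease-in;
      }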

    Initially, this was focused on touch gestures. However, it can also be extended to other "gestures" such as the current JavaScript events "click", "mouse(over/out)", "key(up/down)", etc. This allows for non-touch gestures: keyboard, voice/speech, input from any other device, and so on.

    CSS/Gesture Selectors Format

    The concept is to represent these in CSS code with the following selector formats (two versions, using the mobile gestures swipe left and tap hold as examples):

    1. selector:gesture-(gesture name)

      Similar to the :hover and :focus pseudo-selectors (using the current pseudo-selector format)

      Examples:
      div:gesture-swipeleft { CSS code }
      a:gesture-taphold { CSS code }

    2. selector[gesture="gesture name"]

      Similar to attribute selectors, introduced in CSS 2.1 but used more widely with CSS3

      Examples:
      div[gesture="swipeleft"] { CSS code }
      a[gesture="taphold"] { CSS code }
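
    As a fuller usage sketch, here is how the first format might express a swipe-to-dismiss pattern. This is a minimal sketch assuming the proposed :gesture-swipeleft selector; the class names and declarations are illustrative, not part of the proposal.

      /* Hypothetical usage: slide a message off-screen when it is swiped left.
         Only :gesture-swipeleft is proposed syntax; the rest is ordinary CSS. */
      li.message {
        transition: transform 0.2s ease-out;
      }

      li.message:gesture-swipeleft {
        transform: translateX(-100%);
      }

    In the second format, the last rule would instead be written li.message[gesture="swipeleft"].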

    Constraints/Problems

    At this point, three initial problems appear:

    1. New Devices/Platforms w/ Gestures – some open, flexible path/procedure is needed for the recognition of new gestures. I recommend the following as an initial path:

      Device/Platform Support → JS event support → CSS/gesture selector available

      This path would govern the availability of the selectors mentioned above. However, there is the problem of device-specific gestures, which could mean that certain selectors are available only on those devices. Currently, touch events are available only on touch-enabled devices; if a [front-end] developer were writing code for a non-touch device, the JS support would be nonexistent, and therefore so would the CSS/gesture support (see the feature-detection sketch after this list).

    2. Gesture Uniformity – if a gesture gains support across platforms/devices, the related CSS selector should never be prefixed the way CSS3 features like animations or transforms currently are (e.g. "-iphone", "-android"). I recommend this because, unlike CSS3 features, these are OS-based ("native") issues. The selectors should be device/platform "neutral": let platforms/devices be solely responsible for interpretation.
    3. Computer Processing – if this gets implemented, how does it affect processor/power usage? How does the current set of CSS3 features (transitions/animations) deal with differences in processing power (hardware acceleration)?

      If this gets added to the current feature set, will there be a bigger demand for hardware processing? What would that mean for small, less powerful devices? (This kind of constraint was the drive behind iPhone design and construction.)

      Overall – This is an issue that will need to be taken up post-proposal, with people more knowledgeable than me.
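
    On the availability problem in item 1, a stylesheet could degrade gracefully where a gesture selector is unsupported. Here is a minimal sketch: the @supports selector() feature query is existing CSS (Conditional Rules), while the gesture selector and the class names are hypothetical. A browser that does not recognize the pseudo-selector would skip the block and keep the fallback control.

      /* Fallback: always provide an explicit dismiss control. */
      .dismiss-button { display: inline-block; }

      /* Where the hypothetical gesture selector is recognized, hide the
         fallback control and let the swipe gesture do the work instead. */
      @supports selector(li:gesture-swipeleft) {
        .dismiss-button { display: none; }
        li.message:gesture-swipeleft { transform: translateX(-100%); }
      }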

    In summary,

    1. Creation of CSS/gesture selectors to allow for basic interactivity without the need for JS events
    2. Providing a "path" that allows for more gestures, for future devices/platforms as well as current ones, while remaining platform/device independent

    Flickr – Scans of hand-written notes from Polaris notebook