I think about this as "Engineering is about solving problems. Design is about solving the right problems."
— Ivan Wilson (@iwilsonjr) December 12, 2018
A response I gave to Roger Johansson (@rogerjohansson) on Twitter (please read through the whole thread):
From my view, it seems that front-end development has split into two camps: one web-focused, one app-focused.
— Ivan Wilson (@iwilsonjr) July 8, 2017
As I’m writing this, it’s a rainy morning in Vancouver, Canada. Nothing new. I’m spending time with friends I see only once a year, around my birthday. In this case, I’m here to attend the IA Summit conference this week. This is my first non-US conference since IxDA Interaction 13 in Toronto.
Looking back at that conference, several things stood out, and some became influential years later. One of them was a short lecture by a designer named Nate Archer called “Beyond Responsive”.
Well, four years later, those words seem prescient. The world has been filled with all sorts of devices that access the web – more than just the trio of phone/tablet/desktop. Basically, any device with access to the web is an access point, from watches to 4K TVs. But there is another way of looking at this. Instead of “devices”, let us consider going in the direction of “inputs”. Responsive design appeared not just alongside mobile devices but alongside devices that are also touch-enabled. Now that mobile devices are as ubiquitous as any household appliance, front-end developers like myself have to code for interactions that take place on touchscreens as much as (or even more than) with mouse/keyboard. (Though we could be doing a better job with the keyboard than we currently are.)
[Note: touch-enabled devices are not necessarily phones/tablets, and feature detection for touch is still a bit tricky]
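As a sketch of why this is tricky, CSS Media Queries Level 4 offers interaction media features that describe the *primary* input rather than detecting touch outright – so a touch-capable laptop may still report a fine pointer. The selectors below are hypothetical, not from any particular site:

```css
/* Primary input is coarse (likely a touchscreen): enlarge tap targets */
@media (pointer: coarse) {
  .nav a {
    padding: 1em;
  }
}

/* Primary input can hover (likely mouse/trackpad): hover styles are safe */
@media (hover: hover) {
  .nav a:hover {
    text-decoration: underline;
  }
}
```

The point is that these queries describe inputs, not devices – the same line of thinking as moving from “devices” to “inputs” above.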
In some respects, the beautiful lie of responsive design is that the constraints are visual, via breakpoints and media queries. But what if those constraints aren’t visual? CSS has hidden artifacts describing inputs – media types. If you look at the spec (https://www.w3.org/TR/CSS21/media.html#media-types), the following types are supported:
screen, print, speech/aural, handheld, tty, etc.
Screen is the most familiar, with print and speech following. But there’s tty? From the spec, tty refers to devices like terminals and teletypes. The latter was a telecommunication device that has long since disappeared with the advent of email. But back in the day, it was considered important enough to be included in the W3C CSS spec. Now think about the future. Someday, will we consider mouse/keyboard interactions as obsolete as the teletype?
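For illustration, media types are still written the same way today, even though Media Queries Level 4 deprecated everything other than screen, print, speech, and all. The selectors here are placeholders:

```css
/* Applies only when the page is printed */
@media print {
  nav,
  .sidebar {
    display: none;
  }
}

/* The deprecated tty type, once meant for terminals and teletypes */
@media tty {
  body {
    font-family: monospace;
  }
}
```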
Now, we are seeing the advent of AI interfaces – sophisticated interfaces that provide access to the same information we reach with mouse/keyboard and touch.
Which brings me back to the conference I’m attending, the IA Summit. This year’s topic is artificial intelligence and information architecture. The main job of a front-end developer is building interfaces for acquiring information. Obviously, things will change in the next couple of years. But change into what?
One of the highlights of the year was lecturing for the first time at a conference – in this case, CSS Dev Conference in San Antonio, TX. Basically, I decided to take some advice and take a chance. After sending my proposal, I was shocked and thrilled to be selected via anonymous vote in July.
Of course, getting the talk ready was even harder than the waiting. It took months of writing, editing, and practice. But I was able to pull it together and delivered it to a small audience at the conference on October 17, 2016. The talk was about UX, coding, and forms, peppered with things that I’ve done during the last ten years.
I also want to thank the other speakers at the conference for helping me relax, for giving me advice on speaking for the first time, and for sharing their own experiences giving lectures.
And finally, I want to thank Christopher Schmitt, Ari Stiles, and Elizabeth Moore for helping make my first experience as a lecturer a wonderful and memorable one. It means so much, after years as an attendee, to not only be speaking but giving back to a community that I respect.
Thank you all 🙂
(Originally published on CDG Interactive/Innate blog,
edited by Emma Lehmann, republished December 9th, 2016)
Over the past few years, many of you have heard of the term responsive design. Basically, it’s an approach where we build web apps and websites to be usable across a wide range of devices, from mobile phones to laptops.
But instead of tricks, techniques, and more code (there are plenty of basic responsive design tutorials out there), I want to go in a different direction. Instead of asking ‘how do we do this?’, I want to ask ‘why do we do this?’.
Here at Innate, we recently designed our Accessibility Services website with responsive principles. Instead of answering basic questions, we want to delve into the types of problems we encountered that influenced how the design developed, as these are common problems for all responsive sites. It doesn’t matter if it was one person or a company like Innate; when we all build a responsive site, we are solving these problems at every step of the way.
So, let’s start solving some of these problems.
The Three ’C’s
Responsive design is like solving a puzzle. When someone asks me to describe it, I tell them the following three words: constraints, context, and content.
For brevity, I like to call them the three ’C’s. If I were to give them descriptions:
- Constraints – the conditions, restrictions, and parameters for the given problem
- Context – the situations or circumstances in which you have to work on the problem
- Content – the way in which information is displayed for the given problem
These three ‘C’s are needed in varying amounts when building a responsive site – sometimes more of one than another. But together they are necessary to solve responsive problems. The better the mix, the better the product. Each part contributes to the whole.
So, how do we use the three ‘C’s?
Referring to the above graphic, think of content as water and constraints as a container. Depending on the amount of content, you will choose the appropriate constraints to display it. If you have a long article, viewing it on a phone will require more scrolling than on a widescreen monitor.
With Context, referring back to the water analogy, our problem can change depending on circumstances or environment. At room temperature, water is liquid. But change the temperature and we get either ice or steam. With design, that means adjusting our design to meet these changes.
Like all problems, we need constraints to get the best solutions and designs. Finding constraints allows us to make good assumptions about how to solve a problem. Let’s start by making a few right now.
The most obvious problem in responsive design is conforming to the screen sizes of numerous devices. How do we get a website or app to fit in the screen width of a phone, tablet, and laptop? When we consider the number of devices on the market, we see there is a large range of screen sizes. But by finding constraints, you narrow down the problem, making it manageable.
— Innate (@InnateAgency) June 9, 2015
What is our minimum screen size? Right now we’re not thinking about smart watches (at least not for the purpose of this demonstration), so let’s stop at mobile phones. Apple has pretty consistent design parameters, and the classic iPhone has a minimum width of 320 px, which sits below the widths of the more recent, larger models. This is the stopping point on the small end of our constraints scale.
On the large end, we are not considering TVs (at least not for the purpose of this demonstration), so we need an upper bounds for desktop monitors or laptops. Right now, 1400 px is a nice place to put an upper-limit constraint.
Why these numbers? Global stats are a start, but the best statistics combine your own site logs and Google Analytics.
In addition to the extremes, we have a few other numbers to consider, such as ones for laptops (1024 px) and iPads (768 px). These aren’t necessarily standard, but they are starting points. During the course of building, you will find that the other two parameters – context and content – affect the constraints. For example, the footer may not look right somewhere between 320 px and 768 px; if this is the case, a new constraint will need to be added. You can really have as many constraints as you want, but the best motto (and the one with fewer headaches) is: the fewer constraints, the better.
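A minimal sketch of these constraints as media queries, using the numbers above (the `.container` class and its widths are placeholders, not from an actual project):

```css
/* Base styles serve the 320 px minimum: a fluid, full-width layout */
.container {
  width: 100%;
}

/* Tablet constraint */
@media (min-width: 768px) {
  .container {
    width: 750px;
    margin: 0 auto;
  }
}

/* Laptop constraint */
@media (min-width: 1024px) {
  .container {
    width: 980px;
  }
}

/* Upper-limit constraint: stop growing at 1400 px */
@media (min-width: 1400px) {
  .container {
    width: 1360px;
  }
}
```

Each `min-width` is one constraint; adding the hypothetical footer fix mentioned above would mean one more query.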
Constraints are like conditions, though. They don’t need to be about screen size; there are others as well, such as orientation (portrait or landscape), resolution and pixel density, etc. Other constraints could come from areas beyond your own work (product requirements, project specifications, etc.).
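Those non-size constraints can be expressed the same way. For example (selectors and file names are illustrative only):

```css
/* Orientation constraint: lay the gallery out in a row when landscape */
@media (orientation: landscape) {
  .gallery {
    flex-direction: row;
  }
}

/* Pixel-density constraint: serve a sharper image on high-DPI screens */
@media (min-resolution: 2dppx) {
  .hero {
    background-image: url("hero@2x.png");
  }
}
```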
No matter which constraints you are dealing with, they can be influenced by context and content. Let’s look at how this works.
Learn how to adapt to different screen sizes from this informative Intel tutorial:
As you’ve seen, constraints can help define a problem and lead to a solution, but context informs these constraints. The Oxford Dictionary defines ‘context’ as:
The circumstances that form the setting for an event, statement, or idea, and in terms of which it can be fully understood and assessed.
For example, taking one device (a tablet), we can drill down into the analytics data to discover insights that drive content, such as: 1) the time of day when users read on tablets, at home or at work; 2) whether tablet users hold their device in landscape or portrait mode; or 3) whether the devices used for certain tasks are Android or Apple, and their screen size/viewport.
— Luke Wroblewski (@lukew) May 12, 2015
But more than just statistics, it comes down to research and observation. Take a device (there are device labs you can check out), learn how it handles, and compare it with other devices. The experience gained influences how we design and build responsive sites and apps.
Why does this matter? By using context, we can refine our design to better fit the problem we need to solve. As a consequence, context will shape each of our constraints to accommodate user behavior.
Yes, it feels time-consuming, but it’s worth it to create a great user experience. There is still a third element that must be considered, though: content.
At its simplest, content is the stuff that’s on the page. However, with the ever-changing layouts of responsive design, content must become more sophisticated.
In thinking about content, we have to remind ourselves that the core concept of the Web is information.
With the advent of responsive design, we now have a process for creating websites and apps that carry nearly the same information regardless of device constraints. However, constraints and context shape how we view content. It’s not a matter of just hiding or showing words. It becomes a matter of determining what content has the highest priority.
With changes in layout and the concept of the fold largely negated, we needed to rethink how we place content on a screen. Or I should say, we need to reconsider old ideas about placing content on a screen.
For example, the non-responsive NASPGHAN site has complex, deep navigation. Intuitively, you would think that the code for this would come at the beginning of the page. However, because the navigation is so large, it would take up a vast amount of space before the content the user wants to read. In this case, the code for the navigation is written at the bottom of the page, with its position displayed at the top via CSS. If you were to use a screen reader or other non-visual tool, you could get to the content readily but still reach the navigation via skip/anchor links.
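A rough sketch of that pattern – the markup and class names here are hypothetical, not the actual NASPGHAN code. The navigation comes last in source order but is positioned at the top visually, with a skip/anchor link to reach it:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <style>
    /* Navigation is last in source order but displayed at the top via CSS */
    .top-nav {
      position: absolute;
      top: 0;
      left: 0;
      width: 100%;
    }
    /* Leave room for the positioned nav above the content */
    main {
      margin-top: 4em;
    }
  </style>
</head>
<body>
  <!-- Skip/anchor link lets non-visual users jump to the navigation -->
  <a href="#site-nav">Skip to navigation</a>
  <main>
    <!-- The content the user came for appears first in source order -->
  </main>
  <nav id="site-nav" class="top-nav">
    <!-- The large, deep navigation lives at the bottom of the document -->
  </nav>
</body>
</html>
```

Screen readers walk the source order, so the content arrives first; sighted users see the nav at the top as usual.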
When creating content, keep a mobile-first mantra. Design for the mobile site first and let the constraints of a small environment help select the most important content. Then, as the user gains more real estate or features, give them the extra content. Essentially, let your constraints and context be a guide in your content selection.
Prioritizing content creates a flow of information along the screen which aids in how layouts are structured. This, in turn, allows you to adjust content for better solutions to constraints. When creating content for responsive design, the best practices are:
- Prioritization – determining what content gets the highest importance
- Distribution – determining which content is important enough to cause readers to scroll down
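A mobile-first sketch of that prioritization (the class names are illustrative): base styles carry only the highest-priority content, and the extra content arrives as the viewport grows.

```css
/* Mobile first: secondary content is hidden by default */
.related-articles,
.sidebar-promo {
  display: none;
}

/* As real estate grows, progressively reveal the extra content */
@media (min-width: 768px) {
  .related-articles {
    display: block;
  }
}

@media (min-width: 1024px) {
  .sidebar-promo {
    display: block;
  }
}
```

Note that hiding with CSS still downloads the content; for heavy assets, conditional loading is the better (if more involved) route.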
A Collaborative Art
The most important thing that I hope to stress is that responsive design is not a singular effort. As in the jigsaw puzzle at the beginning of this blog post, each project is a combination of the three ‘C’s – a little more content in this project, a little more context in that one. Responsive design is a process that involves many pieces and, in some cases, many people to bring it together.
A couple of months ago, I was looking at a promotional video touting some new technology – something that was written solely for mobile. They went on about their processes: by focusing not on the desktop, they were saving file size and increasing performance, which is all well and fine. Anything to make life better, especially on those days when I want information without waiting for everything to compose itself during the morning rush hour. But at the end of it, I wanted to ask this question (which, in hindsight, I should have added to the comments):
Why should there be any difference between the desktop and mobile?
Before answering, think about it real hard.
Don’t worry. I’ll wait…
The current paradigm of mobile is based on two things: the mobile phone and the tablet. But isn’t this the same sort of thing we had before – the PC as the desktop? Didn’t we get over this? I got over it years ago, especially when my previous job required me to work with both Windows and Linux.
What I am thinking is that the current paradigm is just as short-sighted.
Let me put it this way: in a year or less, why not see a mobile device become the desktop?
Why not give the "desktop" touch-enabled events like its mobile cousins?
What I am imagining is the mobile/desktop schism not just disappearing but being redefined.
If my life is almost entirely located within the confines of my mobile phone, why not go all the way?
Our perception of the desktop is that of a monitor tethered to an external hard drive and other peripherals. What about a rapidly approaching future where our version of the "desktop" is a mobile device tethered to cloud storage?
Just a thought.
Don’t wait too long.
Sorry for the month delay in blog entries but it’s been a non-stop rollercoaster of travel and events.
Here is the shortlist of what happened after the last blog entry and a few important things happening this month:
WordPress DC Town Hall with founder Matt Mullenweg
1.31.2011 – Washington, DC
This event happened at Fathom Creative, down the road from CDG Interactive. It was well attended, and the sponsor even set up an internet stream for online viewing and questions. Matt was a really relaxed, calm guest who talked about how WordPress was started (at heart, WordPress started as an image gallery) and answered plenty of questions about it, running a business, things in the works for the future, as well as what could be better. I asked how it was doing in the mobile world and got a surprising answer (700% growth in two years! – with apps on almost every mobile platform).
IxDA Interaction 11
2.9-12.2011 – Boulder, CO
Decided to arrive early to this conference (third time for me) to relax in Boulder and get to a workshop for the first time. However, I fell ill Tuesday morning and spent the following 24-48 hours in bed. Missed the workshop but attended the full conference. I even got to attend an after-party on the first night (it was definitely an experience, especially the music and its location inside the Boulder Theatre). This conference was well attended and did not disappoint, concluding with the keynote speech from Bruce Sterling (design critic as well as sci-fi writer).
All the keynotes as well as the individual lightning lectures were interesting in one way or another. However, the tone, from my perspective as a developer, was different in that there was more of an internal focus than in the last two years. Whereas I was more in sync with the last two (especially with mobile coming up big during that span), this one was more inward-looking than anything else.
- Related stories – http://www.ixda.org/interaction/
- For related keynotes/lightning lectures – http://www.ixda.org/resources
One of the new additions to this conference was a day for design-related activities. In my case, it was geocaching, where I spent a few hours in the streets of Boulder playing hide-and-go-seek for hidden treasures. (Thankfully, I was well by then!)
I will be doing a CDG blog entry on my geocaching adventure later on this month.
Oh, BTW…it snowed 3-5” and went from single digits (Tuesday) to 60s (Sunday) in one week.
You thought DC had crazy weather!
Nixon in China
2/19-20, 2011 – NYC, NY
I packed my bags again the following weekend for a trip to NYC for The Met’s presentation of John Adams’ 1987 opera Nixon in China. I last heard this opera on CD a decade ago in college, but the performance did not disappoint. It was well sung by all the performers, and John Adams (who conducted his own work) got a standing ovation.
What was interesting about the opera, apart from the music, was the staging. As a person who grew up during the last glimmer of the Cold War, some of the scenes were familiar from all the news broadcasts of the time (you know, when you only had TV and print). The opening scene of the Nixons stepping out of the plane matched the videotape footage to a point where it was eerie. Of course, the big irony, particularly for those in the audience, is how much has changed in the almost 40 years since that meeting. Case in point: during a scene in the second act, Pat Nixon was presented with a jade elephant. The official near her remarked "We can make hundreds of them cheaply!", which was followed by laughter (with a tinge of irony) from the audience.
The second act ended with the agitprop play (whose music later became the basis for Adams’ stand-alone work The Chairman Dances), which really reminded someone of my age of the old Socialist/Communist displays of the ’80s or, more recently, those in North Korea. Of course, the big irony is that so much of that would begin to change two years later.
Leaving memory lane, I spent a quiet following day listening to three Shostakovich quartets (the 11th, 12th, and 15th) and Beethoven’s Grosse Fuge, Op. 133, performed by the St. Petersburg Quartet at Bargemusic. Back home on Monday (Presidents’ Day).
And that was my month of February.
Will be returning to NYC for two concerts:
- March 26 – NY Philharmonic/Avery Fisher Hall for Bartok’s 1st Piano Concerto
— I will be able to say that I have heard all three concertos live – 3rd in DC/NSO (2005) and 2nd in Boston/BSO (2007)
- April 16 – Met Opera for Berg’s Wozzeck
— Heard Berg’s other opera Lulu last year
Hopefully, things will be slightly quieter later in the year.
Later (multiple crossing of digits…)