Dana Chisnell: I love this part, because I haven't actually done anything yet. Go green.

Audience: Go white.

Dana: Go green.

Audience: Go white.

Dana: Yes, I am a native Michigander. I grew up here at the end of Rochester Road. If you go up Telegraph Road and do a little jog and keep going north until you've reached the middle of the town, you'd get to my hometown. Anybody who doesn't know that this is the secret handshake for Michiganders? You must leave now before the drinks are served. As a Michigander, I can safely say that we are among the least smug people in the world, unless you went to school at Ann Arbor.

[laughter]

Dana: Kidding. Not really. Really though, we are the least kind of know-it-all people that you're ever going to meet. Typically, we always think that somebody else knows more, or knows better than we do. But we slide every now and then, and I'm going to tell you my story about that.

I had just finished an amazing usability test session. I know that sounds funny. In so many ways, it was just like so many other usability test sessions that I have ever done, and there have probably been thousands of them by now. In a very special way, this one was different. This one revealed one of the biggest breakthroughs in a design that I think I might have ever witnessed. It all came down to one word. The word was "choice."

I walked the participant out of the lab, and I handed him over to the grad student who was helping us check everybody in and make sure that everybody got paid at the end. Rarely have I been so excited about discovering the source of frustration for users. I was pretty jaded up to this point. Before this session, the design that we had been working on was preventing participant after participant from doing what they wanted to do. I think we were up to, say, participant number 16 at this point. With the revelation from this session, we could remove a major obstacle by changing one word: "choice."
The team I was working with was about to have a really important conversation about the word "choice." I opened the door to the observation room, where the lights were still off. "Is he gone?" Somebody said, "Yeah, I think he's out of the building." I flipped the lights on. The celebration began. The room erupted. Everybody was talking at once. That was amazing. We finally unlocked why people were getting stuck. This project ended up being the thing I'm most proud of in my entire portfolio. I've been doing this a while now.

About a year before this eureka moment, I got a call. It was from a guy named Drew Davies. Drew is a really nice guy. He's a great designer. He runs a design firm in Omaha, Nebraska, called Oxide Design Company. Now, Drew never calls me, even though by this time in our story we've known each other for six or seven years, and we've worked on a bunch of projects together. We're on a strictly IM-ing basis, but on this day, Drew called me. He said, "We've got to do this project, and we've got to do this together, because we're the only people on the planet who know what needs to be done on this project."

The project that Drew was talking about was to make a responsive, accessible form. On the surface, it seems pretty mundane, nothing special. In fact, probably everybody in this room has had some involvement in designing a form. It might even have been accessible. It might even have been responsive. It might even have been both. In this particular case, the form was a ballot. We were, indeed, two of the very few people on the planet qualified to take this project on.

Drew's company, Oxide Design, does a ton of branding and identity work, but he was excited about this project because he's an expert on design and all things voting and elections. His specialty is ballot design. He led the most important design research project in the history of ballot design as the head of the AIGA Design for Democracy project.
He keeps telling me that it's a privilege that I know him. His team at AIGA spent two years researching and developing a design specification for print and electronic ballots, as commissioned by the Election Assistance Commission. This is a US federal agency.

Drew wanted me on the project, which I'm grateful for now, because I had done, with Ginny Redish, the seminal research about the language of instructions on ballots. This project was also commissioned by another federal agency, this time the National Institute of Standards and Technology. NIST is in charge of voting system standards. Ginny and I learned in that study that voters do, indeed, perform better with plain language instructions, versus the kind of thing that you usually see on ballots: official, jargon-y stuff. By perform better, I mean that they're much more likely to vote the way they intend. People in our study also preferred the plain language ballot without even consciously understanding what that difference meant.

The project that Drew had in mind meant writing a proposal for a grant to the Election Assistance Commission, which had released seven million dollars. I'm going to say that again: seven million dollars. That's a lot of money, specifically for research about accessibility in voting. Now, when was the last time you heard about that kind of money being available for anything related to accessibility? We had to be part of this, because this just doesn't happen every day.

We did have a question in mind. The question we wanted to answer was this: What if anyone could vote on any device? What if anyone, with any level of ability, could use our form to vote? Right now, if you walk into a polling place in the United States, the voting machine for people with disabilities is off in a corner. It's a separate system. We could potentially, with just 100 grand of that seven million dollars, come close to reaching a holy grail of design: universal usability.
There's already a lot of research about designing for people with disabilities. There's a lot of work out there about making things accessible for people with low vision and who are blind. In fact, you'd think that vision issues were the only kind of disability if you talked to most software and Web designers, because their test for accessibility is whether somebody can use the design with a screen magnifier or a screen reader. There's also quite a lot of work in a bunch of different fields, including civic design, on accessibility for people with motor and dexterity and other physical disabilities. When you combine those various bodies of research, the whole covers a pretty wide section of the bell curve.

But what about people on the ends of the bell curve of disability, out there on the edges? What about issues like low literacy, and mild cognitive disorders like short-term memory loss? What about accessibility for people whose disabilities might be invisible?

Enter Kathryn Summers, the third expert on our team. Kathryn is Professor of Information Arts and Technology at the University of Baltimore. She teaches information design. She's an expert on designing information for people with low literacy. I think this actually makes her an expert on designing for the invisible. Drew and I learned from Kathryn that, get this, 48 percent of American adults read at or below the sixth-grade level. 48 percent. Almost half of everybody you know reads poorly, or not at all. We also learned from her that people with low literacy read linearly. That means that, on a website or in an app, they're going to read straight down the page without ever being aware that there's something on the left or the right. Imagine the implications of that for design.
Kathryn had also warned us -- and this was important to the project -- that, because people with low literacy either avoid reading or read every single word and act on it, they would take any of the words that we put in front of them literally.

My research with Ginny on plain language and instructions on ballots, and a lot of research about plain language in technical communication and user assistance that went before, gave us guidance on the design for this particular form. Best practice said, "Embed lots of assistance. Add instructions about the user interface in the interface. Include illustrations about how to mark the ballot and use the system. Add content about offices and candidates and contests. Write 'Help' to prevent unintentional voting or unintentional skipping."

We combined Drew's earlier work on ballot design with my work on instructions and what we thought we understood from Kathryn's work. We took it to the lab to see what we could learn about the usability and accessibility of this design for people with low literacy.

Now, from the outside, this study looked pretty much like any other usability test. But because we wanted the form to be usable by people with low literacy, we needed participants with low literacy. Because Kathryn had been researching usability issues for this group for some years, she knew where to find people. Many are in underserved neighborhoods. Lots are racial and ethnic minorities. What we did was send our grad students out to the poor black neighborhoods in Baltimore, where they stopped in at grocery stores and barber shops and parks, you name it, any place they could, to ask people if they'd like to participate in studies at the university, for pay. But you can't really tell -- when we got people in for their appointments -- you can't really tell who can read and who can't by looking at them. And demographics don't predict disability either. We couldn't assume.
We administered an assessment called REALM, Rapid Estimate of Adult Literacy in Medicine, as part of the intake process. REALM is a quiz-like instrument. Kathryn had been using it for a while. It's really simple. It's just three lists of words. We asked participants to read aloud each word they knew, and if they got to a word they didn't know, they could either sound it out or they could skip it. Altogether, there were 66 words. If they could read fewer than two-thirds of those words, they were reading at a sixth-grade level or below. It takes a couple of minutes, just a couple of minutes, to do this, and it was part of a bunch of other things that we did when people arrived, like fill out a demographic questionnaire. It was all bundled in there. We got a little bonus here. It turns out that most of these people, in addition to having low literacy, had not used computers or smart phones before.

Otherwise, we did what we always do. We looked at the related research literature. We looked at all of the possible competitors that might be out there, or products that had similar functionality. There are lots of apps for school elections and things like that. We generated design principles. We created tons of sketches, even though we had ideas to work from to begin with from the earlier research, and we did lots of reviews. We created a paper prototype that was a monster and hilarious to work with. Then, we created a digital prototype and we tested that. Altogether, we conducted 33 usability test sessions, iterating design between every single session. This was awesome for the moderator, I've got to tell you -- that was me. I didn't always know what I was going to get. By the end, every page had at least 3 and up to 20 iterations.

Let me be clear. This story isn't really about ballot design; it's about process, and what the process taught us about best-practice design. What is best practice?
We in UX often use "best practice" to make an argument to our boss that we want something to be the way we want it to be. But we also use best practice to mean a procedure of some kind that's accepted as being the correct thing or the most effective thing. And as a field, we love our best practices. Best practices, and using best practice, seem to suggest a promise. The promise of a so-called best practice is that someone else has made the mistakes already. You, by following these best practices, can bypass those mistakes.

But really, what are we talking about when we're talking about best practices, and how are they different from conventions? A convention is a way in which things are usually done. This is a subtle difference, and also, by the way, a convention is usually what people mean when they talk about traditional something-or-other in UI design. Like, "In a traditional app, you wouldn't use a hamburger icon." You mean a conventional one. When you use a convention in a user interface, you're forcing, and then reinforcing, learned behavior. That's what that is.

Allow me to demonstrate. These that I'm going to show you in this list are best practices, because they're actions on the part of the designer that make things better for users. Things like adding alt text for images, and making sure that videos are captioned. This is good for people who are learning languages, for giving information in noisy places, and for people who are hearing-impaired and who are deaf. Using high-contrast text to background. This is even more important for mobile than it is on the desktop, I think. Making something obviously clickable. Giving some kind of hint that this thing wants to be touched, or tapped, or clicked. Thank you, iOS 7. And including relevant illustrations at the right time. Those are best practices.

If those are best practices, then let's go back to conventions. What do those look like? Mostly, they're interaction elements or they're navigation, but not always.
And mostly, they're elements that somebody just made up to solve a particular UI problem, a particular design problem. Like scroll bars. We needed to have more stuff on the page, so somebody invented this idea of scroll bars. But how does yours work? What do you have to do to move the content up and down? It's different on Windows from Mac. Does the elevator button work the way you expect it to? Are you reinventing that? Another one that I like is saving files, or Save As, on Windows versus duplicating the file on Mac OS. Duplicating the file was probably the way it should always have worked, but we encountered some early technical constraint, probably, that made that happen. Then, one of my favorite things: let's just put the sitemap up in the menu, call it a mega menu, and make it available right there at the top.

Moving to mobile, let's talk about pull to refresh. I don't know who thought this was a good idea. There is nothing discoverable about this. Never mind. One of my favorite things to hate, actually, is the hamburger menu icon. This is a hamburger menu: three simple lines that look like decoration in the upper left corner of this app. This is the Shake Shack logo. Shake Shack is a burger restaurant. See the similarities? You tell me which one is more appropriate and more useful.

Conventions come from technical constraints. They are also sometimes thoughtfully designed and sometimes thoughtfully tested to determine their effectiveness for users. In fact, Facebook was the first to implement the hamburger icon. Then, they gathered analytics and did testing. Guess what they found out. They found that people could not infer what those three little lines meant. They went untouched. They went unused. Everything under that -- it was a lovely menu of delicious things that you could do with Facebook -- went unused. Research by Catalyst backs this up. Because we confuse these things, this implementation was considered best practice. Facebook did it, so it must work.
Then it became convention. You see it all over the place now. A lot of us followed it, because it was a shortcut. It was a shortcut to figuring out the best solution to a problem that we were trying to solve in our own designs. What we often call best practices are actually conventions. What we use as conventions are actually shortcuts. We as a field, we are really confused right now. All any of us really wants to do is avoid having to make stuff up for ourselves. Let somebody else go first.

What happens when you take shortcuts? In my experience, it takes longer to get to something great. I'm not saying that you have to reinvent whatever anybody else has done. I'm not saying that you have to start over from the beginning. I am saying that there may be penalties for just taking a best practice and blindly plugging it into your design.

That gets me back to the lab for our eureka moment. How about the word "choice"? We had created a design based on data, absolutely. Data that informed and supported best practices for UI design and plain language. We were all experts in our field. The data came from more than 100 users. But they weren't these users. What we learned in that moment was that a simple word, the word "choice," was actually the wrong word. We had made a bad choice. We had used it throughout the user interface. After all, when you're voting, you're making choices. There were buttons on the ballot pages that were labeled Review Your Choice. There were buttons on long lists of candidates, for scrolling up and down, labeled See More Choices. When you finish marking the ballot, there's a review screen that lists all the decisions that you've made. The heading there was Review Your Choices.

People were getting stuck on the review screen. For a bunch of sessions, we couldn't tell why. Finally, we stripped enough of the noise away that this 15th or 16th participant could tell us what was wrong with the review screen. When we got to the review screen, he went, "Huh."
He sat there for a minute, looked around, scrolled a little bit. This was a big black guy. My imitation of him is not going to do this justice. He seemed really puzzled and frustrated, though. Finally, after a really long pause, he said, "I thought I was done. Why are you showing me more choices? These actually look exactly like everything I've already seen. Is this thing broken? I know that you're not done with it yet, but there's nobody new here. Why would you show me new choices now?" I'm getting goosebumps again, thinking about it.

[laughs]

The word "choice" is modal. To a person with high literacy, it's easy to make the inference to go from choice, meaning you're selecting from among options, to choice, meaning that you have chosen. In going from the inside of the ballot, with all of the races and contests, to a summary page listing everyone you voted for, we thought that the format and the layout would be enough to signal the change in mode. We were so wrong. We had wanted to use a really simple, non-jargony kind of a word. In doing that, we introduced ambiguity and confusion. This was really ambiguous and confusing for people who carry out instructions literally and read linearly. We changed the heading to Review What You're Voting For. Seems obvious now. It's much longer, but it's a lot clearer, and it completely unstuck the design.

This was not the only major obstacle that we encountered in the study. It was just the last one that we cleared. There were plenty of lessons learned here.

Distractions. Distractions came from illustrations and content about offices and candidates. The illustrations didn't actually help. They got in the way as people tried to interact with them. We had wanted to help people make better decisions, too, but the information that we added about candidates, accessed through the little blue "i" buttons -- they were everywhere -- just slowed people down, just like now. We didn't want people to have to spend extra time in the voting booth.
People were completely derailed by amazingly simple words like "choice," and we inadvertently deserted people in a couple of places by not helping them recover from mistakes, because messages were too wordy. There was no obvious way to get back to the ballot from the error message. More simple words didn't actually help participants. Fewer, more direct words did. Things that we thought might feel rude were totally effective. People got stuck on the wording on buttons. The labels actually needed to be wordy, telegraphic calls to action. A shorthand "Submit," that was just not going to work.

Rapid iterative testing and evaluation was our chosen process. After the first testing session with somebody with low literacy, we took a lot of words away. Then, after a couple more sessions, we took some interactions away. We had expected, when we got to the lab for the first session, to just make a few refinements. We were pretty confident that we had nailed it. After those sessions, and then after a few more sessions, we changed the wording, because we finally eliminated enough of the frustrating cruft that people could actually tell us what they weren't understanding when they got stuck.

Our process was actually all about stripping away best practice. By stripping away the best practices that we had so carefully implemented, we could simplify what we designed. Only we didn't know that was our process at the time. We trusted that the process and the methods that we were using would give us a good result. In fact, the process worked. It helped us prove that conventions were broken for these users, and that some of the best best practices were yet to be discovered. We stumbled on one of them, and it's what we started to call plain interaction. By this, what we mean is the fewest, simplest steps with maximal focus on the user's immediate next interaction, one moment at a time, adding up to a smooth flow and an excellent experience.
We started with evidence-based designs, with best practices, with conventions. Voters with low literacy were interpreters of our intent. Through their translating, we got to...what I'm pretty confident is a beautiful, minimalist look, feel, and interaction for what we now call the Anywhere Ballot. Those fabulous participants taught us that we were fooled by best practice into believing that we actually knew what we were designing. In the end, these voters on the edges, on the outside of the curve, handed our best practices back to us, saying, "Not good enough." We were grateful. Thank you. I'm grateful to you.

[applause]

Dana: Keep up the good work. That was a fun project.

Facilitator: We have a little time for questions, and I want to remind the people up top, you can ask questions, too. You've been very quiet all day. What questions do people have for Dana about this?

Audience Member: I have a question. In the process of thinking about the folks with low literacy in particular, because they are such an invisible community to us, we don't know who's there. Were there any sort of moments during the testing where you were just blown away by how these people adapt to the world around them?

Dana: Oh, yeah. Watching people with low literacy cope when there are lots of words was absolutely fascinating, actually. I hated to see the sessions end and was happy that we did two rounds of testing, because I learned so much. They would ask questions like, "You want me to read this?" I've heard lots of participants ask that question. That doesn't mean that everybody who asks that question is low-lit, but then, they would go on to actively ignore whatever was going on. There is one interaction in this ballot designed to prevent you from voting for something by mistake. If you want to change your vote, we don't let you just touch the next thing. You have to unselect the thing that you selected before and then select the new one, and then there's a little message that comes up.
I swear to God, we went through...I don't know, 16, 18 versions of that message, because best practice says, "Explain what's happening and then tell people what to do about it." That meant that there were between 30 and 50 words in this message. But these people were not reading this message, and I would inevitably have to give hints, until we finally figured out that it really just came down to five words that were bolded that said, "You have to go back and unselect..." I'm putting in too many words, though: "Unselect your first choice and then select a new one." Just focusing the attention made a huge difference in how they performed. There were things like that all through the UI: instructions on every page, what buttons said. Even though these were not computer users, they expected certain things in certain places. Like, "To go forward, the button should be on the right. To go backward, the button should be on the left." That was awesome, even though they might not even be paying attention to color, or what the label was.

Facilitator: You have a question out there? She'll go first, then down here, just because they never get to ask questions out there.

Audience Member: I'm a non-American...

Dana: Where are you?

Audience Member: I'm up here. Hello.

Dana: Hi.

Audience Member: Hi. This may just be me not knowing...

Dana: Where are you from?

Audience Member: That's a complicated question.

Dana: Are you on the run? Are you in a...

[laughter]

Audience Member: No, not quite. I have dual citizenship. Did you have to account for the ballot being in a different language?

Dana: No. In this particular study, we were focusing on English only. But language parity is really important in the United States, actually. There are a whole bunch of counties and cities that have to, by law, make alternate languages available. For example, New York City must make ballot materials and information for voters available in nine languages besides English.
Los Angeles County, the largest jurisdiction in the United States, which has 4.65 million registered voters -- do that many people live in Michigan? -- has to provide language support in 13 languages and English. There are really not good UIs for dealing with that, and most voting jurisdictions are still on paper. It's horrible. It's a big problem. My partner Whitney Quesenbery and I at the Center for Civic Design are about to, this fall, go do some research with Los Angeles County and with Travis County, Texas, where Austin is, about multiple languages and alternate languages in ballots and voter information. I don't know the answers yet for that, but I'm looking forward to it. We are purposely skewing some of the recruiting to people who have what in the elections world is called limited English proficiency.

Audience Member: Hello. I noticed that, when you showed us the form initially, it had a lot of best practices on it. Then, for the people with low literacy, you stripped out a lot of things that you thought were best practices. My question was, in general, how did that affect the results for the typical person that the best practices were set up for?

Dana: Right. Outside of low-lit people, performance on this ballot is amazing. The problem I have when I show it to people who are high-lit, who are designers, is they go, "Well, this doesn't follow best practice. Where is the illustration?" I'm like, "[inaudible 0:34:43] data shows that stuff got in the way."

Audience Member: It's the UX people who get flustered by...

Dana: Yeah. It freaks the UX people out completely.

Audience Member: Damn UX people.

Dana: Troublemakers.

Audience Member: I'm so delighted to hear you presenting this, because this voice has been coming up in the wilderness every so often. I remember Tom Landauer talking about stripping out features and unnecessary things in the '70s, but it gets pushed aside by the people who keep wanting to...
Dana: When you can have...

Audience Member: Illustrations and words...

Dana: When you can have a computer screen as big as the biggest LCD television, there's a lot of real estate to fill up there. I think, as a practice, we're all still struggling a lot with how best to use that space, how to transfer those kinds of experiences to smaller spaces, and also how to take into account a much wider range of people with different kinds of ability in different contexts. I think the great thing about being in this business as long as you and I have been is that we get to see these things over and over again. All of this has happened before.

Audience Member: Do whatever we've got to try to make it happen.

Dana: Yeah, all of this will happen again.

Facilitator: We have a question up there.

Audience Member: Hi. I was just wondering, of the people you were able to recruit, did you ask them how many had voted before in elections? If they had or hadn't, what difficulties did they tell you they were experiencing?

Dana: Yeah, we did. In our demographic questionnaire, one of the blocks is all about voting experience. We ask them when was the last time they voted and what kind of system they used to do that. In the opening interview, I would often ask them about their last voting experience. A lot of these people -- I'm going to say, out of the 33 sessions...I'm doing this off the top of my head, but I could give you the report and you could see the numbers -- about a quarter had never voted before. Some had voted. We did this study in February, March of 2013. Some of them had voted in 2012 because they wanted to vote for Obama for president. I'm deeply curious how well that went for them, but they had not voted before that. Other people had voted only in presidential contests, in presidential elections, before that. Hardly anybody was voting on a regular basis, even though there actually are elections going on all the time.
Most people don't pay much attention to them, but there's a primary here and a primary there and a school board election here, things like that. They were not active voters. That was just accidental, and it turned out to be great.

Audience Member: Going forward, in designs that you're involved in, how will you be taking into account what you've learned about best practice and conventions? Is it going to change the way you start a project?

Dana: Probably not.

[laughs]

Dana: Best practice was a really great place for us to start. The problem really for us was that we were so smug about it. We thought that we had this thing nailed. We were really pleased with what we went into the lab with. Thank God we were required in this project to do usability testing, because in the end, we learned an amazing amount of stuff from that. I think next time around, if I get a chance to do something like this, I probably will start in the same place if I can. Like I said, I don't want to have to reinvent things. Nobody does. And why not start from work somebody else has done? But then, go test it with the appropriate people in the right place.

Audience Member: Dana, it sounds like a lot of those best practices came from a business context with a certain type of user in mind. This is a good example where you're helping us as a community to learn more by sharing examples of users that we've never tested with, right? Because my company doesn't care about those people. They're not customers. I'll never see them in the lab. But it sounds like we could all benefit...

Dana: You probably will, and you won't know it.

Audience Member: Right. What could we do to get some more folks, perhaps in this room, opportunities to work on a voting project or something that gets them out of their comfort zones, so they can learn something new that they wouldn't expect?

Dana: This is Keith. He's my civic design shill.

[laughs]

Dana: Thank you for that. That is an excellent question.
We have crazy projects going on all the time. Some of them are really big projects and some of them are really small flash projects. For example, Josie has helped us out. Josie Scott, who's over here in the audience, has helped us out a couple of times by being a field researcher in a neighborhood that we just couldn't get to. She helped us get a lot more coverage. We just completed a study about poll workers and security. She helped us by being somebody who we could give a field guide to and say, "Go make some observations and come back and tell us what you saw. Collect artifacts, do some interviews." That was great. That was a deep-dive day.

We also do a lot of flash things, as I mentioned. Something will come up in some out-of-the-way place or other. Like in 2010 in New York City, the Board of Elections was being sued by a couple of the minor parties, basically on grounds that the ballot that was being used in New York City and the voting system were not usable, and that they would cause people to vote in ways that they didn't intend. We -- Whitney and I and a couple other people -- just called up everybody we knew. We emailed everybody we knew in New York, New Jersey, and Connecticut and said, "Can you get to 23rd and 5th Avenue on Saturday, and would you spend the day doing a usability test?" And it was a blast. Now, the problem for me on that particular one was that I couldn't be there. I was in California, watching Twitter as people were checking in. It was like getting election returns back. We do this kind of stuff once or twice a year. I've also done flash tests like that in San Francisco and Auckland.

But if you want to get involved at the local level, there are a couple of things that you can do. One is, please sign up to be a poll worker. The average age of poll workers in the United States is 72.

Audience Member: That's an exaggeration.

Dana: No. Now, this is not necessarily bad.
But if you guys signed up, that would bring down their age pretty substantially, and elections have changed a lot in the last 10 years. It's much more IT-intensive, and there are a lot more rules and regulations to follow, with voter ID and other kinds of restrictions. Volunteer to be a poll worker. It's not really a volunteer job. It will probably pay you about a hundred dollars for a day that starts at four or five o'clock in the morning and lasts until maybe 10:00 PM if you're on a good team. It might go later in a presidential election. But this is like field research on steroids. It's so fascinating to see the people come and go and to watch how that happens.

If you can't do that, one dire need in every county I've ever worked with and talked to is proofreading election materials. It probably has happened to you, and it will happen sometime in your future, that you get a ballot that has a candidate on it whose name is spelled wrong, or it's the wrong district on your ballot. Somebody like you could take the checklist from the county clerk and check everything that gets printed. This is a huge help. It's really hard to edit your own stuff. While there's a process for doing it and an exhaustive checklist in virtually every city and county...

Audience Member: City and township.

Dana: Yes, and township...

Audience Member: [inaudible 0:43:20] .

Dana: Having somebody from outside can really help do that. Those are my ideas about where to go and what to do. If you would like to get involved in the projects that we are doing, give me a piece of paper with your contact information and where you're located, and we'll add you to our roster. When something comes up, we will get in touch and see if you're available to help us out.

Facilitator: Dana, thank you so much.

Dana: Thank you.

[applause]