Friday, August 18, 2017

Lloyd's Q Sort Tool: Version 1.0 Just Released

I've been very derelict in providing updates on my project to create a digital tool that allows instructors to use Q sorts in their teaching. I suppose my only excuse is that I've been focused these past three months on actually doing the design work and coding to make it all possible. Well, I'm happy to announce that version 1.0 of my app - Lloyd's Q Sort Tool - is now available. [Cue the dropping of balloons and confetti.]

Part of the task has been the monumental effort to also create a web portal for instructors to use. Although I had designed the MySQL database carefully over a year ago to allow for instructor accounts, I had never actually created anything more than a crude administrator interface for me to do the behind-the-scenes database tasks to support the field tests I have been running. And yes, this was a very ugly site. So, I've been working on developing an attractive and (hopefully) intuitive web portal for instructors while also making the final revisions to the app. Oh yeah, I was also teaching two courses for the University of Georgia this summer at the same time.

But, I am now ready to make instructor accounts available on a very limited basis. [Cue the triumphant horns.] Strictly speaking, this release is a beta of version 1.0, because I feel much more field testing of the app is needed before I'm confident that I have finally reached a true version 1.0. If you are one of the few who read this blog and one of the very few who might be interested in using Q sorts in your instruction, please feel free to go to my Q sort project's web site to request an account:

http://www.nowhereroad.com/qsort/

Of course, if you are not an instructor, but would like to check out how my Q sort tool works, I invite you to download the app and try one of the public Q sorts I've included on my web site.

Here is a video demonstration of how the app works:


Summary of the Most Significant Revisions to the App


I need to provide separate blog postings to spotlight and explain each of the big revisions to the app, but here is a quick summary of the most significant. Some of these represent true breakthroughs in my design.

Breaking Through the 20 Statement Limit


For the first two years of the project, my design limited all Q sorts to no more than about 20 statements. Although this proved sufficient for all of my field tests, I knew that this limitation had to be overcome at some point. So, this past spring I mounted a design effort to find another approach. The solution came in the form of dynamically collapsing and expanding statements as they are sorted on the grid. This allowed me to switch to a more standard design of the Q sort board, one resembling an inverted normal curve. I did keep the sideways orientation of the board, which helps leave the majority of the screen open for the statements. However, this approach triggered other user experience challenges, and it was only through feedback from people who participated in some critical field tests that I was able to find a way to make it work.
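
To give a flavor of how this works, here is a minimal LiveCode sketch of the collapse-and-expand idea. This is not the app's actual code; the handler names and custom properties are made up for illustration:

 -- collapse a statement field to just its number once it is placed on the grid,
 -- and expand it again when it is picked back up
 -- ("cFullText" and "cStatementNumber" are hypothetical custom properties)
 on collapseStatement pFieldName
    set the cFullText of field pFieldName to field pFieldName -- remember the full text
    put the cStatementNumber of field pFieldName into field pFieldName
    set the width of field pFieldName to 40
 end collapseStatement

 on expandStatement pFieldName
    put the cFullText of field pFieldName into field pFieldName
    set the width of field pFieldName to 300
 end expandStatement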

Creating a Sorting Sandbox to Encourage Initial Grouping of Statements


It is standard practice within Q methodology to ask participants to first group all statements into three general piles: most agree, least agree, and neutral. Although my first prototypes advised people to do this, it never really worked well given the way the screen was organized. I found a way to persuade users to do this preliminary grouping that also solved the problem of limited screen space. Along the way, I think I found a unique design approach to the preliminary grouping. I added three long strips at the top of the screen, marked respectively as high, neutral, and low. Participants are advised to take each statement and do a "gut sorting" of it into one of these three groups. The unique part of the design is that the placement of each statement, from left to right, on each of the three strips gives a priority ranking to the statements in that strip. I have never seen this added dimension in any of the Q sort designs - paper or digital - that have been developed.

I then added a 1-click option for participants to move the statements in any of the three strips into the main area of the screen, in expanded form and in the same order in which they appeared in the strip. The important thing to note about this strategy is that the initial grouping effectively removes all of the statements from the main area of the screen, opening this space for the statements to appear when the 1-click option is used. In this way, the limited screen space is used effectively even when a Q sort contains a large number of statements. I also programmed an auto-magnify feature to immediately show the full statement when the participant mouses over any of the statement numbers in the sandbox.
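
The auto-magnify feature is simpler than it sounds. Here is a rough sketch of the kind of handlers involved (again, the names are made up for illustration, not the app's actual script):

 -- when the pointer enters a collapsed statement tile in the sandbox, show its
 -- full text in a floating "Magnifier" field (all names here are hypothetical)
 on mouseEnter
    if the short name of the target begins with "stmt" then
       put the cFullText of the target into field "Magnifier"
       show field "Magnifier"
    end if
 end mouseEnter

 on mouseLeave
    hide field "Magnifier"
 end mouseLeave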

Auto-Numbering of the Statements


I also programmed the list of statements to be auto-numbered. This provides a secondary cue as to which statement is which. I combined this with the option of a pop-up window listing all of the statements. Furthermore, this list can be copied to the clipboard for pasting into another application, such as a word processing document, giving the participant easy access to all of the statements along with their numbers. This helps participants cross-reference statements to their numbers.
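
Here is a simplified sketch of the copy-to-clipboard idea (the field and handler names are made up for illustration, not the app's actual code):

 -- build a numbered list of the statements and place it on the clipboard
 on copyStatementList
    put empty into tList
    repeat with i = 1 to the number of lines of field "Statements"
       put i & ". " & line i of field "Statements" & return after tList
    end repeat
    set the clipboardData["text"] to tList
 end copyStatementList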

Improving and Extending the Q Sort Analysis


I made a key decision about a year ago to put the analysis directly into the app itself. Before then, the analysis was done with a separate app I built just for the instructor. This decision put the analysis directly into the hands of students, with real-time access to the data. That is, a student could check to see how many responses had been submitted and run an analysis on the responses submitted so far.

More recently, I extended the analysis by adopting a method much more closely aligned with Q methodology. Although I have long had an analysis option called "Are You Like Me?" - based on difference scores calculated between each pair of people who completed the Q sort - this never proved to be an effective means of getting people to engage with each other. This summer I programmed this option to also provide correlation coefficients, along with noting which are statistically significant. Although I have yet to field test this revision, I'm optimistic that it will make people more likely to talk both to those who share their views and to those who do not.
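
For the curious, here is a simplified sketch of the kind of calculation involved: a Pearson correlation between two participants' sorts, each passed as a comma-separated list of column placements. This is an illustration rather than the app's actual code:

 -- returns the Pearson correlation between two equal-length lists of numbers
 function pearsonR pListA, pListB
    put the number of items of pListA into tN
    put 0 into tSumA
    put 0 into tSumB
    put 0 into tSumAB
    put 0 into tSumA2
    put 0 into tSumB2
    repeat with i = 1 to tN
       put item i of pListA into tA
       put item i of pListB into tB
       add tA to tSumA
       add tB to tSumB
       add tA * tB to tSumAB
       add tA * tA to tSumA2
       add tB * tB to tSumB2
    end repeat
    return (tN * tSumAB - tSumA * tSumB) / \
          sqrt((tN * tSumA2 - tSumA ^ 2) * (tN * tSumB2 - tSumB ^ 2))
 end pearsonR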

Overview of the Instructor Web Portal


Creating the instructor web portal was a challenge, not so much in terms of programming it, but in designing it to sufficiently explain how everything works to an instructor who has never met me or never heard of a Q sort. Fortunately, the instructors to whom I've already given out accounts are people who heard me speak about this project at various conferences and asked permission to use the app in their teaching. At the time, I had to kindly tell them that the project was not yet "ready for prime time" and that I could not yet honor their requests. But, I added their names to a list of interested folks.

I provide some extensive guidance to instructors on how to use the site. I also created the following video demonstration:


The Difference Between Q Sort Definitions and Q Sort Activities


There are some key concepts that, if not well understood, will make everything else confusing for instructors. The main one is probably the difference between a Q sort definition and Q sort activities. I won't give the long answer here, but this distinction has allowed me to take a smart approach to creating Q sorts in a way that should save instructors lots of time. In short, one first creates a definition of a Q sort that contains almost all of the information about the Q sort. Then, one creates one or more activities that are linked, or bound, to that one definition. In this way, one can use and reuse a Q sort definition dozens or even hundreds of times without having to go through the time-consuming task of creating yet another Q sort.

Next Steps for the Project


It's been almost two and a half years since I first wrote, in April 2015, about my decision to create a digital version of a Q sort. After hundreds of hours of coding and more than a dozen field tests, it feels so good to finally put this project "out there." Yes, I have built it, but will anyone come? To be honest, I only want a small number of people to request an instructor account at first, so that I can identify and fix the errors and problems that are inevitably present. My hope is that eventually at least a small community of instructors will form who are interested in exploring ways to capitalize on students' subjective perspectives in teaching. I have charted an initial strategy for their use, but I know there are many other creative approaches yet to be identified.

I look forward to seeing how the project unfolds. I plan on presenting this project at upcoming conferences, beginning with the small (but wonderful) IDD@UGA conference on August 19, 2017. I will also be presenting the project at the upcoming conference of the Association for Educational Communications and Technology (AECT) in Jacksonville, Florida in October, and I have submitted a proposal to present it at the American Educational Research Association (AERA) conference in New York in April 2018. Finally, I intend to submit a proposal to provide an update on the project at the February 2018 Conference on Higher Education Pedagogy (CHEP), held each year at Virginia Tech.

I have spent a lot of time on this project. I hope it proves to be useful to teachers, instructors, and trainers.




Monday, July 10, 2017

The "Any 3 for 15" Game and the Courage to Start Over

I'm again teaching two courses for the University of Georgia this summer. One is a design course, and recently we were talking about accessibility. I've tried to broaden my students' understanding of accessibility. I want them to see accessibility as a design paradigm rather than as a set of mandated technical requirements. That is, instead of seeing accessibility as a nuisance for a designer, I want them to see it as an opportunity to improve their designs.

One activity I've used over the years is a cool little game that demonstrates the power of representation, namely that the way a concept, principle, or procedure is represented can dramatically alter the ease of learning it.

The rules of the game are simple. There are two players, and each takes turns picking a number from 1 to 9. Once a number is chosen, it is unavailable to be chosen again. The first player holding any three numbers that add up to 15 is the winner. It's a surprisingly challenging game to play. As we play it, people begin to have the feeling that they've played the game before. There is a big reveal at the end. It turns out that this little game is really just tic-tac-toe (also known as noughts and crosses), but in mathematical form. The game is fully revealed when the numbers are shown in this pattern:
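
Here is one standard arrangement (any rotation or reflection of it works equally well):

 2  7  6
 9  5  1
 4  3  8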


This is commonly known as a magic square. Notice how any row, column, or diagonal adds up to 15. Obviously, adults don't play tic-tac-toe any more because once you know the secret of choosing the center square, there is no way you can lose. However, even after the big reveal, the math version of the game is still a challenge to play. The two versions of the game have exactly the same rules, but the representation of each is dramatically different. My hope is that students will take this point to heart and try to find the best representation for their designs.

Creating a Version of the Game with LiveCode


I've had groups play this game for years using PowerPoint. I would just stay in edit mode and manually move the numbers around as the game was played. However, it occurred to me a few hours before class that I could create a quick version in LiveCode that would be easier to manage. Forty-five minutes later, I had a nice working version of the game. I still had to mentally keep checking to see if one of the players had three numbers that added up to 15. After class I thought it would be great to program LiveCode to do this checking. Well, that began a design saga that took me over a week to complete.

Identifying All Unique Groups of Three Numbers


The algorithm needs to check all possible combinations of three numbers in the player's hand after each turn. I saw a couple of ways to handle this problem. One way is to use a well-known formula to determine the total number of possible combinations, then turn LiveCode loose spitting out random combinations until all of the unique ones are found. Here's the formula:

$$C(n, r) = \frac{n!}{r!\,(n - r)!}$$

r is the number of numbers in a given combination, which is always 3 in this case.

n is the total number of numbers in the list, which is the number of numbers a player has drawn up to that point.

Obviously, the player has to have at least three numbers before you start calculating. And, the most numbers a player will have is 5 given that the players take turns drawing numbers. The formula above indicates there are only a total of 10 unique combinations of three numbers in a list of 5 numbers. (Order of the numbers obviously does not matter.)

A better strategy - and the one I selected - was to figure out the algorithm first, mapping it out step by step on paper. It's pretty simple:

Take five numbers: 1, 2, 3, 4, 5

The pattern of combinations can be determined as follows:

123
124
125
134
135
145
234
235
245
345

Yep, there are 10 of them. Do you see the pattern? There are three loops here, one for each digit position in a row. Believe me, it is very easy to get totally confused when trying to troubleshoot a problem with an algorithm having three interwoven loops - it's like trying to balance three spinning plates on sticks. Once you focus on one, the others are prone to fall.
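
Here is a simplified LiveCode sketch of that triple loop. It is not my actual game code (the field names are made up), but it lists every 3-number combination in a hand and flags any that adds up to 15:

 -- list every 3-number combination from the comma-separated numbers in a
 -- hypothetical "Hand" field, and flag any combination that adds up to 15
 on checkHand
    put field "Hand" into tHand -- e.g. "1,2,3,4,5"
    put the number of items of tHand into tCount
    put empty into field "Combos" -- hypothetical output field
    repeat with i = 1 to tCount - 2
       repeat with j = i + 1 to tCount - 1
          repeat with k = j + 1 to tCount
             put item i of tHand & item j of tHand & item k of tHand into tCombo
             if item i of tHand + item j of tHand + item k of tHand = 15 then
                put tCombo && "<-- adds up to 15!" & return after field "Combos"
             else
                put tCombo & return after field "Combos"
             end if
          end repeat
       end repeat
    end repeat
 end checkHand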

I spent about two hours on a Sunday afternoon working on this without success. I came back to the problem that evening, only to find myself thoroughly confused. So, I started over.

On the Reluctance to Start Over


I had about three hours invested in an approach that was not working. At times, it appeared I was tantalizingly close, just to have the algorithm fall apart again. The idea of starting over after having invested so much time is a tough pill to swallow. And yes, I think it does take some courage to admit defeat and start from scratch.

I turned to a favorite strategy where I build a new "toy app" from scratch that focuses solely on the one problem. In this virtual sandbox, I find I can focus better on a thorny problem. It really helps to start with a blank canvas and not have all of the other stuff from the bigger project jumbling around in your head. It was a busy week, so it was not until Wednesday that I got around to doing this.

Unfortunately, my second approach yielded the same confounding results, and I again decided I needed to start over. Fortunately, all of the code this time was contained in a single button in a very simple stack, so I just moved that button off to the side and made a new button. I worked on this Friday night and some of Saturday. And, I again got myself into a brain-splitting knot.

Fourth Time's a Charm


I again decided to start over. Although I had basically the right idea behind the loops, my approach to ending one loop and starting the next was flawed. The week's efforts convinced me that I needed to focus on the most concrete representation of the problem I could muster. So, instead of working with ever-growing lists inside variables, I used fields as containers. This gave me a visual way of seeing how the algorithm worked, not unlike the tic-tac-toe example.

I don't think I can sufficiently or succinctly explain my solution here, but suffice it to say it was a joyous moment when my program finally worked.

The lesson to take away here is that sometimes the wisest way to solve a problem is to abandon an approach - regardless of the amount of time already invested in it - and start over. Knowing when to throw in the towel is tricky, but I guess that's where the wisdom comes in.

Play the "Any 3 for 15" Game


(Alert! Depending on your Internet speed, this can take a full minute to download.)

And yes, the game title definitely needs work. Right now, it sounds like a selfish version of the Three Musketeers motto (i.e. "All for one and one for all").

A Final Note: Computing Factorials with LiveCode


I mentioned that I first flirted with the idea of using the formula above as the starting point for my solution to this problem. I spent about 15 minutes trying to program LiveCode to compute the factorial of a given number. I would have eventually figured it out, but fortunately, I stopped, did a Google search, and found a very elegant solution:

The example comes from this web page:

http://docs.runrev.com/Control-Structure/function

To quote from that page:

"A function can call itself. The following example calls itself to compute the factorial of an integer:"

 function factorial theNumber
    if theNumber <= 1 then return 1
    else return theNumber * factorial(theNumber - 1)
 end factorial

My solution would have been quite convoluted in comparison. And, I learned something new about LiveCode as a result.
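
As a quick check, the factorial function can be plugged straight into the combinations formula from earlier. This little snippet (mine, not from the documentation page) should report 10:

 -- number of 3-number combinations in a hand of 5 numbers: C(5,3)
 put factorial(5) / (factorial(3) * factorial(5 - 3)) into tCombinations
 answer tCombinations -- displays 10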

Thursday, April 27, 2017

Creating an Instructor Version of my Q Sort App

I am again attending the annual conference of the American Educational Research Association. This year it is located in San Antonio, Texas. I will be presenting a poster session about my research on instructional applications of Q Methodology. I think I have finally found a suitable title for this research:
Embracing Student Subjectivity: Using Q Sorts in the Classroom
I think this is more accurate - and more interesting - than titles I've used in the past (e.g. "Adapting the Q Sort Research Methodology for Instructional Purposes"). I'm also quite proud of my poster if only because I managed to build an interactive component into it, something I've yet to see in an AERA poster.



I'll have pizza statements that people can move around the Q sort board in the lower left-hand corner of the poster.

I also just realized this is my first blog post for 2017 - quite a long hiatus. However, the silence does not reflect my LiveCode work over the past four months. I've been involved in many LiveCode projects, including updating my video analysis tool and building an agricultural education prototype. So, there is lots to write about in the coming weeks and months.

The topic of this post is about some of my latest Q sort app development work. I've field tested my Q sort app and emerging instructional strategy many times over the past few months, most notably in an undergraduate course in environmental health. Here's a quick reminder of what the app's home screen looks like:



The app and my approach have worked very well. But, one thing I have found myself needing is an instructor version of the app to be able to do the following things:

  • Demonstrate for students how a Q sort works, but not have the data for that demonstration uploaded to the server.
  • Analyze any given Q sort without actually completing the sorting activity myself.
At first, I just made some quick hacks in a copy of my "regular" Q sort app. However, I came to the conclusion I needed to devote some serious time to creating a viable instructor version. One other good reason to do so is that I'm very close to entering the next phase of this research and development activity, namely giving other instructors their own accounts in order to use my Q sort app as they wish.

A Key Variable: varInstructor_version


Rather than create a truly separate app just for the instructor, I figured the smartest thing to do was simply to build into the existing app a way to quickly transform it into the instructor app. That way, I don't have to keep up with two apps. Remember, the current app is still very much a prototype. I'm sure lots of revisions and enhancements will be made over the coming months. So, I created a global variable, "varInstructor_version", that is set to either true or false. If false, the app works as the student version. When true, it works as the instructor version. All I need to do is set the variable, then create a standalone for whichever version I want at the time. When the variable is set to true, the home page shows one small but significant checkbox:



If the instructor leaves the box unchecked, the app works just like the student version. If checked, another variable - varInstructor_mode - is set to true. Two things then happen. First, another button appears titled "Analyze a Q Sort":


Clicking this button will give the instructor a pop-up window to enter any Q sort code. The app jumps immediately to the card showing the list of people who have completed the Q sort along with buttons to do the analysis.

Second, all data upload scripts are disabled within the app. That way, the instructor can enter the Q sort code in the regular input box (the same one the students would use), then go through a complete demonstration of how to complete the Q sort. On the final screen - the one that normally says the data are being uploaded - the following message is shown:


The phrase "not really" cues the instructor that their data really has not been uploaded.
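
Here is a simplified sketch of how the two flags might fit together - one way to show the checkbox and to short-circuit an upload handler. The handler and control names are made up for illustration; the real app is more involved:

 global varInstructor_version, varInstructor_mode

 -- show the instructor checkbox only in builds where the version flag is true
 on preOpenCard
    if varInstructor_version is true then
       show button "Instructor Mode"
    else
       hide button "Instructor Mode"
    end if
 end preOpenCard

 -- each upload handler checks the mode flag before touching the server
 on uploadResponses
    if varInstructor_mode is true then
       -- in instructor mode, skip the upload; the final screen adds "(not really)"
       exit uploadResponses
    end if
    -- ...the regular upload code runs here...
 end uploadResponses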

Final Thoughts


I'm very excited by the prospect of soon giving other instructors the option to try out my Q sort app with their students. I gave a presentation about my work with Q sorts at the annual Conference on Higher Education Pedagogy at Virginia Tech back in February. The turnout was good, and the main question I received afterwards was "Can I use it with my students?" I promised everyone that I was working on it. Likewise, I hope my Q sort work is as well received by the folks who come to my poster session at AERA. However, AERA usually brings a much tougher crowd to sessions. Fortunately, I walk by the Alamo each morning on the way to the conference, so I'm inspired to be "armed and ready" to defend my work. Of course, despite the heroics of Davy Crockett, James Bowie, and Colonel William Travis, the Texans lost the battle. Very reassuring.