Wednesday, September 30, 2015

Update on Using LiveCode to Build a Word Cloud: A Cloud is Forming!

I spent about two more hours on my little word cloud project. I knew that the next step of building the actual word cloud from the list of unique words and frequencies was definitely within reach. Here's an example of one of the first word clouds I built using the text of Abraham Lincoln's second inaugural address:


There is a "mild" hack at play here. The words simply go to a random spot within this square area. Each separate field containing a word has the script "grab me" on mousedown so that I can easily move the words to a more aesthetically pleasing location. I decided it wasn't worth trying to figure out ways to keeps all of the words from overlapping, etc. However, for an excellent example of how to accomplish this, check out the blog posting by Ali Lloyd on his efforts of building a word cloud. Ali is one of the excellent professionals who work at RunRev (the parent company of LiveCode). Ali's solution is exactly what you think of when you think of a word cloud. It has words of different sizes and colors with different orientations filling every nook and cranny. It's really marvelous. So, many thanks to Ali for sharing this link with me in his comment to my previous blog posting on this topic. I'll be studying his code for some time to come. (And any script that uses sines and cosines makes me want to purr.)

How I Did It


OK, back to my humble attempt. One of the challenging parts of the project was figuring out the step-wise progression of font sizes. The word cloud above looks OK, but I was able to improve the word cloud algorithm in several fundamental ways. All of the code to build the word cloud is in the green button "Build Word Cloud" shown at the bottom of this post, but here are a few key highlights.

Font Size Step-Wise Progression


I perfected the step-wise progression of the font size so that the word with the highest frequency of use gets a font size of 96 pixels. In the example above, the font size is directly proportional to the frequency. That was inadequate for many reasons, the most obvious being that the font sizes can be radically different for just the first few most frequently used words. So, I revised the script so that the next most frequently used word gets a font size 12 pixels smaller, no matter how many fewer times it was used, and so on. Let me explain a little further. If one word was mentioned 100 times, but the next most frequently mentioned word was used only 20 times, then my revised script would give the second word a font size only 12 pixels smaller. Think of 12 pixels as the height of the "step." From a data visualization standpoint, that skews the proportion in an inappropriate way, but it makes for a more aesthetically pleasing outcome. So, I think it all depends on what the purpose of the word cloud is. For scientific purposes, it is inadequate because it skews the output, but for a quick visual to get the gist of what's going on in a passage of text that is pleasing to the eye, it's fine. I also made 12 pixels the smallest font size that would be used. (In my original script, it was possible to have one word be 96 pixels and all remaining words 12 pixels if the most frequently used word was mentioned an inordinate number of times compared to all other words.)
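
In code, the step rule boils down to something like the following sketch (the real, working version is in the button script at the end of this post; the variable names here are just for illustration):

    put 96 into tSize  -- the most frequent word always gets 96 pixels  
    put item 2 of line 1 of field "word frequencies" into tPreviousFrequency  
    repeat for each line tLine in field "word frequencies"  
      -- every drop in frequency costs exactly one 12-pixel step, no matter how big the drop  
      if item 2 of tLine < tPreviousFrequency then subtract 12 from tSize  
      if tSize < 12 then put 12 into tSize  -- 12 pixels is the smallest size allowed  
      put item 2 of tLine into tPreviousFrequency  
      -- tSize is now the font size to use for the word in item 1 of tLine  
    end repeat  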

Adding Color


I also added the option to pick a color at random using the following code:

    put random (255) into rColor  
    put random (255) into gColor  
    put random (255) into bColor  

    if varColor is true then   
      set the foregroundcolor of it to rColor,gColor,bColor  
    else  
      set the foregroundcolor of it to black       
    end if  

You'll need to scan the code below for these lines in the button script. The first three lines just pick three numbers at random from 1 to 255 (LiveCode's random function returns a whole number from 1 up to the number you give it). These are used to produce a random RGB color if the "Color" option is checked.

These changes produced the following word cloud:


I know my graphic design skills don't qualify me to give any "expert" opinion, but it definitely seems like an improvement to me, at least aesthetically. Again, though, whether you get an accurate representation of the data is an important question to ask.

Another improvement was the use of the "font step" slider at the top of the screen. When I ran the algorithm with different text passages, I found that 12 was not always the optimal number for the font step. I decided it was better to let the user experiment with this. A final minor improvement, I think, is making sure that the two words with the highest frequency are always shown in black for added emphasis. Here's a screen shot of the card that builds the word cloud.



I Found Two Golden Nuggets: formattedWidth and formattedHeight


One of the wonderful outcomes of building this project is discovering the "formattedWidth" and "formattedHeight" properties. These properties do all of the hard work of figuring out exactly how wide or how tall a text field needs to be for its contents to fit perfectly within it. I didn't know about these properties when I was first building my Q Sort project, so I came up with my own function - very imperfect - to try to do the same thing. I've since updated my Q Sort app to use these properties. Here are the two key lines of code that accomplish this feat:

    set the width of field "word object" to the formattedWidth of field "word object"  
    set the height of field "word object" to the formattedHeight of field "word object"  

Next Steps


As I looked at Ali Lloyd's code, it reminded me of Richard Gaskin's advice to me almost a year ago to use the "repeat for each" form of going through a list of data rather than the "repeat with" approach that I have become so fond of. My code that computes the frequency of the words in a passage of text is painfully slow, so I'm now very motivated to get with it and try out the "repeat for each" approach. So, look for at least one more update on this little project.
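
To give a sense of why the change matters, here is a minimal sketch - not my current code - of counting word frequencies with "repeat for each" and an array, assuming the cleaned-up text is in a variable named tText:

    repeat for each word tWord in tText  
      add 1 to tCounts[tWord]  -- one pass through the text, no nested loops  
    end repeat  
    -- afterwards, tCounts["war"] holds the number of times "war" appears,  
    -- and "the keys of tCounts" gives the list of unique words  

Because "repeat for each" makes a single pass through the text instead of re-scanning it with "word j of field ..." on every iteration, it should be dramatically faster on long passages.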

As always, the bottom line for me is that I continue to learn new things every time I build a LiveCode project, however small. But, is there really any other way?

Script on the Button "Build Word Cloud":


 on mouseUp  
   //Erase any existing word cloud first  
   put the number of fields into L  
   repeat with i = 5 to L-1  
    put i-4 into j  
    put item 1 of line j of field "word frequencies" into varFieldName  
    put varFieldName into message  
    delete field varFieldName  
   end repeat  
   put false into varColor  
   if the hilite of button "color" is true then put true into varColor  
   put the thumbposition of scrollbar "fontdifferencebar" into varFontChangeAmount  
   //Build the word cloud  
   set the movespeed to 0  
   //Determine the largest frequency - this will get the largest font size in the word cloud  
   put item 2 of line 1 of field "word frequencies" into varMaxFrequency  
   put 0 into varFontDifference  
   put field "minimum frequency" into varMinFrequency  
   //This is the repeat loop that will create each word, resize its text size, then move it  
   repeat with i = 1 to the number of lines in field "word frequencies"  
    put random (255) into rColor  
    put random (255) into gColor  
    put random (255) into bColor  
    if item 2 of line i of field "word frequencies" < varMaxFrequency then  
      add 1 to varFontDifference  
      put item 2 of line i of field "word frequencies" into varMaxFrequency  
    end if  
    if item 2 of line i of field "word frequencies" < varMinFrequency then exit repeat  
    //The next two lines determine the area of the screen where word cloud will be built  
    put random(300)+100 into x  
    put random(300)+100 into y  
    //Create the next word for the word cloud  
    copy field "word object" on card "library" to this card  
    hide it  
    if varColor is true then   
      set the foregroundcolor of it to rColor,gColor,bColor  
    else  
      set the foregroundcolor of it to black       
    end if  
    if varFontDifference<2 then set the foregroundcolor of it to black  
    put item 1 of line i of field "word frequencies" into field "word object"  
    //Determine font height for the word  
    //put varMaxFrequency - item 2 of line i of field "word frequencies" into varFontDifference  
    put 96-(varFontDifference*varFontChangeAmount) into varTextSize  
    if varTextSize < 12 then put 12 into varTextSize  
    set the textSize of field "word object" to varTextSize  
    set the width of field "word object" to the formattedWidth of field "word object"  
    set the height of field "word object" to the formattedHeight of field "word object"  
    //Rename the newly copied field as the word it contains  
    set name of field "word object" to item 1 of line i of field "word frequencies"  
    //Move the word to a random spot within the word cloud screen area  
    move it to x,y in 1 millisecond  
    show it  
   end repeat  
 end mouseUp  

Postscript: About My Formatted Code


Ali Lloyd's post reminded me that the way I've been showing code in my blog posts has been really terrible, so I did the obvious thing and googled "showing code in a blog post using blogger" and quickly found a great tool:

http://codeformatter.blogspot.com/

The much improved formatting above is the result.


Sunday, September 13, 2015

Using LiveCode to Create a Word Cloud (Almost) with a Nod Toward Data Mining

OK, it was 5:30 p.m. on Friday afternoon and I was completely spent mentally from a three-hour faculty meeting. I wanted to end the work week on an upbeat note, so I decided to take a few minutes to work with LiveCode. I had spent all of my free time this week working on my Q sort tool, so I wanted to do something brand-new. So, I started a new LiveCode project that had recently been on my mind. On Thursday I had attended an excellent presentation about data mining by one of our very talented doctoral students - Neo Hao - at the Design, Development and Research Conference hosted at the University of Georgia (and chaired by Dr. Rob Branch). Neo's presentation made me think about how easy it would be to build a simple example of a data mining program: a word cloud using the frequency of words in a given passage of text. This post is only the result of about 30 minutes of work. I didn't finish the project, but I was able to build the basics and I think it is an interesting example of using the excellent list processing capabilities of LiveCode. (Interestingly, I worked at least another hour on the program after I started writing this blog post - on aesthetic stuff like fonts, labels, and some user feedback - in order to make the project "presentable" in a blog posting. That is the way it always is with software design: the time needed to make a program work is always much less than the time needed to make it usable.)

I'm sure you know what a word cloud is, but if not, here's an example built with wordle.net and based on titles of some of the things I've published over the past few years:



As you can see, a word cloud is simply a listing of all the unique words in the passage with a visual representation of the frequency of the words. The more times the word is used, the bigger its font size. A word cloud is an excellent visual representation of the importance of certain words in a given passage of text, and I think you can get a good, quick snapshot of what I've been writing about in my published work just by scanning this image. Notice, by the way, that inconsequential words such as "a," "an," "the," "and," and the like are not represented. Also notice that punctuation has been stripped out. These become important points for us to consider later on.

Now, don't get your hopes up that I'm going to show how to build a word cloud as elegant as this. In fact, all I've built so far is a little program that takes a passage of text and figures out all of the unique words and computes the number of times they are used. It also strips out all of the words you decide are inconsequential. It also provides some quick summary data, such as the total number of words in the passage of text and the total number of unique words.

Here's a screenshot of the program using Abraham Lincoln's second inaugural address:

[ Get the free LiveCode Community version. ]



How It Works


The card consists of the following main fields displayed left-to-right on the screen:
  • original - this field contains the original passage of text you want to analyze;
  • unique words - this field contains all unique words found in the original passage;
  • word frequencies - this field contains the unique words plus their frequencies;
  • ignore - this field (with a reddish background color) lists all words you want to be ignored when identifying unique words.
All of the code is contained within the button "Analyze." The program works with several repeating loops, which I've color coded blue and green:

on mouseUp
   put empty into field "unique words"
   put empty into field "target word"
   put empty into field "word frequencies"
   put empty into field "unique count"

   put empty into field "frequency count"
   put the number of words in field "original" into L
   
   //Search for Unique Words; Remove Punctuation
   put "Finding unique words..." into field "working"  //user feedback
   show field "working"
   repeat with i=1 to L
      put empty into field "target word"
      put word i of field "original" into varTargetWord
      //Strip out any punctuation found in the word
      if the last character of varTargetWord = comma then delete the last character of varTargetWord
      if the last character of varTargetWord = "." then delete the last character of varTargetWord
      if the last character of varTargetWord = ";" then delete the last character of varTargetWord
      if the last character of varTargetWord = "?" then delete the last character of varTargetWord
      if the last character of varTargetWord = "!" then delete the last character of varTargetWord
      if the last character of varTargetWord = ":" then delete the last character of varTargetWord
      if the last character of varTargetWord = quote then delete the last character of varTargetWord
      if the first character of varTargetWord = quote then delete the first character of varTargetWord
      if the first character of varTargetWord = "(" then delete the first character of varTargetWord
      if the last character of varTargetWord = ")" then delete the last character of varTargetWord
      
        //Check to see if the word has already been found
      put the number of lines in field "unique words" into LL
      put true into varUniqueWordFound
      repeat with j=1 to LL         
         if varTargetWord = word j of field "unique words" then
            put false into varUniqueWordFound
            exit repeat            
         end if
      end repeat
      if varUniqueWordFound is true then put varTargetWord&return after field "unique words"
   end repeat
   
   //Compute the frequencies of each unique word found
   put "Computing frequencies..." into field "working" //user feedback
   put the number of lines in field "unique words" into L
   repeat with i = 1 to L
      put line i of field "unique words" into varUniqueWord
      //First, look for inconsequential words and strip out
      put the number of lines in field "ignore" into LLL
      put false into varWordToIgnoreFound
      repeat with k= 1 to LLL
         put line k of field "ignore" into varWordToIgnore
         if varUniqueWord=varWordToIgnore then 
            put true into varWordToIgnoreFound
            exit repeat
         end if
      end repeat
      if varWordToIgnoreFound is true then next repeat
      //Ok, begin counting the unique words
      put 0 into varCount
      put the number of words in field "original" into LL
      repeat with j=1 to LL
         put word j of field "original" into varTargetWord         
         //Strip out any punctuation found in the word
         if the last character of varTargetWord = comma then delete the last character of varTargetWord
         if the last character of varTargetWord = "." then delete the last character of varTargetWord
         if the last character of varTargetWord = ";" then delete the last character of varTargetWord
         if the last character of varTargetWord = "?" then delete the last character of varTargetWord
         if the last character of varTargetWord = "!" then delete the last character of varTargetWord
         if the last character of varTargetWord = ":" then delete the last character of varTargetWord
         if the last character of varTargetWord = quote then delete the last character of varTargetWord
         if the first character of varTargetWord = quote then delete the first character of varTargetWord
         if the first character of varTargetWord = "(" then delete the first character of varTargetWord
         if the last character of varTargetWord = ")" then delete the last character of varTargetWord
         if varTargetWord=varUniqueWord then add 1 to varCount
      end repeat
      put varUniqueWord&comma&varCount&return after field "word frequencies"
   end repeat
   
   put "Sorting..." into field "working" //user feedback
   sort lines of field "word frequencies" numeric descending by item 2 of each
   
   put "Done!" into field "working" //user feedback
   wait .5 seconds
   hide field "working"
end mouseUp


The first repeat loop, color-coded light blue, searches for unique words while removing punctuation. It looks at each word in the passage (in the field "original") and checks to see if the word has already been found (in the field "unique words"). To do this, a second loop (shown in darker blue) is executed that looks at each word in the field "unique words." If the word has already been found, the line "exit repeat" is executed and the program stops looking through the unique words already found and goes on to the next word in the original passage. It's worth pausing for a moment to study this second loop. As my experience with LiveCode grows, I find interesting patterns in my coding.

The question of "uniqueness" in a given text comes up over and over. Likewise, the strategy that I came up with long ago to identify uniqueness - as shown in this dark blue code - has become a time-tested friend to me. Here's a short explanation:

I begin by creating a true/false variable; in this case, I set the variable "varUniqueWordFound" to true. By setting it to true at the start, I'm saying that I'm going to assume that the next word I look at is unique - that is, I expect it not to have been found anywhere previously in the text. In a sense, I'm "daring" the text to prove me wrong. I then do a search of all unique words previously found. Obviously, at the start there are zero words found. Then, as unique words are found, they are added to the text field "unique words." The first loop identifies the words in the original text passage - denoted by the phrase "word i" in the line found near the top of the first loop (in light blue):

put word i of field "original" into varTargetWord

So, the variable "varTargetWord" contains this ever-changing target word. The second loop compares the target word to each and every unique word already found. If a match is found (at any point) two things immediately happen: 1) I put true into the variable "varUniqueWordFound" because, after all, it is true that a unique word was found; and 2) I exit the current repeat, which in this case is the second loop (in dark blue). So, if a unique word was NOT found, then the variable "varUniqueWordFound" remains false which kicks in the last line of code shown in dark blue: 

if varUniqueWordFound is true then put varTargetWord&return after field "unique words"

That line instructs the program to add the current target word to the field of unique words. The first loop (in light blue) then moves on to the next word in the passage and the operation is repeated.

Calculating Frequency


OK, let's move on. The program now counts up how many times each unique word is used in the original passage. This code is shown in green. As it goes, it also checks to see which words it should ignore as inconsequential, as listed in the field "ignore." (And you need to be careful about what you consider to be an inconsequential word, a point I'll return to at the end of this post.) Every time it finds that word in the original passage, it adds 1 to the variable varCount. As it checks each word in the original passage, it must again strip out any punctuation found with the word. So, I just repeat the code for this operation used above. (Any time you do something more than once, that is an indication that you really should create a custom function, which I really should do if I wasn't so lazy.) After it goes through each and every word in the original passage, it puts the unique word and the frequency in the third field "word frequencies." This step takes the most time for the program to execute.
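
For the record, here is a sketch of what such a custom punctuation-stripping function might look like (it's not in the project yet, but it could replace both blocks of "delete the last character" lines above):

function stripPunctuation pWord
   put ".;?!:()" & comma & quote into tPunctuation
   -- trim punctuation from the end of the word, then from the start
   repeat while pWord is not empty and the last character of pWord is in tPunctuation
      delete the last character of pWord
   end repeat
   repeat while pWord is not empty and the first character of pWord is in tPunctuation
      delete the first character of pWord
   end repeat
   return pWord
end stripPunctuation

Each of the two loops could then get by with a single line such as: put stripPunctuation(word j of field "original") into varTargetWord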

Finally, I sort the field "word frequencies" in order of how often the word appears, from most to least (this code is shown in red). However, I added some buttons that give the user the option of also sorting the text alphabetically.
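
The sort itself is a one-liner in LiveCode; the script of the alphabetical sort button could be as simple as this sketch (assuming the same field name):

on mouseUp
   sort lines of field "word frequencies" ascending by item 1 of each
end mouseUp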

Along the way, I add some feedback to let the user know how things are progressing.

Next Steps


The program above generates a list of all unique words in any given passage of text and the frequency that they appear. This is the raw material needed to create a word cloud. Although I'm not sure if I'll have time (or inclination) to work further on this project, here are some ideas on what needs to be done next.

Each unique word needs to be put into an object, such as a button or a field. The text size for that object would depend on the frequency of the word. There would have to be a given maximum font size, let's say 24 pt. Whatever is the most frequent word or words would be given that maximum font size. On the other side of the spectrum, all words with a frequency of 1 would be given the minimum font size, which we'll say is 10. Or, one might decide that only words that have been mentioned more than once or twice get displayed in the word cloud. (Lincoln only mentioned the word "Bible" once, but I think its one mention is very significant.) Either way, the smallest font size would be determined for a group of words. Then, all remaining words would be given one of the other levels of available font size, depending on their frequency. So, some algorithm would need to be devised that divides up all remaining words along this font size spectrum, which would not be hard to do.
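
One simple version of that algorithm would be a straight linear mapping between the minimum and maximum font sizes. Here's a sketch using the 24 pt and 10 pt figures mentioned above (nothing here is from the project itself; it's just one way it might be done):

function fontSizeFor pFrequency, pMaxFrequency
   -- map a frequency of 1 to 10 pt and the largest frequency to 24 pt
   if pMaxFrequency <= 1 then return 10
   return 10 + round((pFrequency - 1) / (pMaxFrequency - 1) * 14)
end fontSizeFor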

The tricky part would be to arrange these objects in some sort of pleasant visual display. Frankly, I'm not sure how I would do that!

So, feel free to download my code to this project and give it a try. (And don't forget to send me a copy.)

So, What's the Connection to Data Mining?


This little Friday afternoon project is a simple example of data mining in that it analyzes each and every word in a given passage and reports some statistics on that passage in a way that I, as a mere human, could not do effectively on my own. It's easy to imagine more sophisticated things you might investigate. Perhaps you are interested in the presence of certain words or combinations of words in a passage of text. For example, I find it very interesting that Abraham Lincoln only used the word "I" once in his second inaugural. (This fact makes it clear that it would be unwise to assume that all small words are inconsequential.) A teacher may find it useful and revealing to analyze essays on a specific topic submitted by all students in a class. You might want to see if some key words are mentioned, where they are in the essay, or how far apart they are in the essay. This is still not a substitute for actually reading the essays, but it's easy for me to see how analyzing even this small mountain of text with cleverly written algorithms would aid the teacher's overall assessment of the class's writing. But, what I think is the most important characteristic of data mining is that the computer will happily do this analyzing for thousands upon thousands of text passages quickly and without error. It will reveal patterns that may subsequently uncover meaning if the algorithm is appropriate and the person interpreting the data is astute.

And, as Neo explained in his talk on Thursday, because of public APIs we all have access to mountains of data from our Twitter or Facebook feeds. The companies themselves have access to all of these data, and you can be sure those companies are mining their data with extraordinary precision. Is their intent noble or malevolent? I really don't know, but it's easy to speculate that it is somewhere in the middle. I find it uplifting that Google could track the spread of flu in almost real time from the search data of people who were feeling ill, rather than in the weeks needed by the Centers for Disease Control (see the book Big Data by Mayer-Schonberger and Cukier). I think it is time for rank-and-file educators to be part of this conversation.

Appendix: Frequency of Unique Words in Abraham Lincoln's Second Inaugural Address



Sorted by Frequency Sorted Alphabetically
war,11 absorbs,1
all,10 accept,1
we,6 achieve,1
but,5 address,2
God,5 against,1
His,5 agents,1
shall,5 ago,2
than,4 aid,1
years,4 all,10
Union,4 Almighty,1
Both,4 already,1
let,4 altogether,2
do,4 always,1
were,3 American,1
would,3 among,1
other,3 another,1
interest,3 answered,2
right,3 anticipated,1
Neither,3 anxiously,1
has,3 any,2
may,3 appearing,1
us,3 appointed,1
Woe,3 arms,1
offenses,3 ascribe,1
must,3 ask,1
those,3 assistance,1
there,2 astounding,1
less,2 attained,1
occasion,2 attention,1
address,2 attributes,1
Now,2 avert,1
four,2 away,1
public,2 battle,1
have,2 because,1
been,2 been,2
every,2 before,1
still,2 being,1
nation,2 believers,1
could,2 Bible,1
hope,2 bind,1
no,2 blood,1
ago,2 bondsman's,1
While,2 borne,1
altogether,2 Both,4
without,2 bread,1
one,2 but,5
rather,2 called,1
came,2 came,2
slaves,2 care,1
cause,2 cause,2
even,2 cease,2
conflict,2 charity,1
cease,2 cherish,1
should,2 chiefly,1
Each,2 city,1
same,2 civil,1
pray,2 claimed,1
any,2 colored,1
just,2 come,2
answered,2 cometh,1
needs,2 conflict,2
come,2 constantly,1
whom,2 constituted,1
offense,2 contest,1
If,2 continue,1
He,2 continued,1
wills,2 corresponding,1
gives,2 could,2
Him,2 course,1
until,2 dare,1
drawn,2 declarations,1
said,2 delivered,1
second,1 departure,1
appearing,1 depends,1
take,1 deprecated,1
oath,1 destroy,1
Presidential,1 detail,1
office,1 devoted,1
extended,1 directed,1
first,1 discern,1
statement,1 dissolve,1
somewhat,1 distributed,1
detail,1 divide,1
course,1 divine,1
pursued,1 do,4
seemed,1 drawn,2
fitting,1 dreaded,1
proper,1 drop,1
expiration,1 due,1
during,1 duration,1
declarations,1 during,1
constantly,1 Each,2
called,1 easier,1
forth,1 effects,1
point,1 else,1
phase,1 encouraging,1
great,1 energies,1
contest,1 engrosses,1
absorbs,1 enlargement,1
attention,1 even,2
engrosses,1 every,2
energies,1 expected,1
little,1 expiration,1
new,1 extend,1
presented,1 extended,1
progress,1 faces,1
our,1 fervently,1
arms,1 fifty,1
upon,1 finish,1
else,1 firmness,1
chiefly,1 first,1
depends,1 fitting,1
well,1 Fondly,1
known,1 forth,1
myself,1 four,2
I,1 fully,1
trust,1 fundamental,1
reasonably,1 future,1
satisfactory,1 generally,1
encouraging,1 gives,2
high,1 God,5
future,1 God's,1
prediction,1 Government,1
regard,1 great,1
ventured,1 has,3
corresponding,1 have,2
thoughts,1 having,1
anxiously,1 He,2
directed,1 high,1
impending,1 Him,2
civil,1 His,5
dreaded,1 hope,2
sought,1 hundred,1
avert,1 I,1
inaugural,1 If,2
being,1 impending,1
delivered,1 inaugural,1
place,1 insurgent,1
devoted,1 insurgents,1
saving,1 interest,3
insurgent,1 invokes,1
agents,1 itself,1
city,1 judge,1
seeking,1 judged,1
destroy,1 judgements,1
war—seeking,1 just,2
dissolve,1 knew,1
divide,1 known,1
effects,1 lash,1
negotiation,1 lasting,1
parties,1 less,2
deprecated,1 let,4
them,1 little,1
make,1 living,1
survive,1 localized,1
accept,1 looked,1
perish,1 Lord,1
One-eighth,1 magnitude,1
whole,1 make,1
population,1 malice,1
colored,1 man,1
distributed,1 may,3
generally,1 men,1
over,1 men's,1
localized,1 might,1
southern,1 mighty,1
part,1 more,1
constituted,1 must,3
peculiar,1 myself,1
powerful,1 nation,2
knew,1 nation's,1
somehow,1 nations,1
strengthen,1 needs,2
perpetuate,1 negotiation,1
extend,1 Neither,3
object,1 new,1
insurgents,1 no,2
rend,1 none,1
Government,1 North,1
claimed,1 Now,2
more,1 oath,1
restrict,1 object,1
territorial,1 occasion,2
enlargement,1 offense,2
party,1 offenses,3
expected,1 office,1
magnitude,1 one,2
duration,1 One-eighth,1
already,1 orphan,1
attained,1 other,3
anticipated,1 our,1
might,1 ourselves,1
before,1 over,1
itself,1 own,1
looked,1 paid,1
easier,1 part,1
triumph,1 parties,1
result,1 party,1
fundamental,1 pass,1
astounding,1 peace,1
read,1 peculiar,1
Bible,1 perish,1
invokes,1 perpetuate,1
aid,1 phase,1
against,1 piled,1
seem,1 place,1
strange,1 point,1
men,1 population,1
dare,1 powerful,1
ask,1 pray,2
God's,1 prayers,1
assistance,1 prediction,1
wringing,1 presented,1
bread,1 Presidential,1
sweat,1 progress,1
men's,1 proper,1
faces,1 providence,1
judge,1 public,2
judged,1 purposes,1
prayers,1 pursued,1
fully,1 rather,2
Almighty,1 read,1
own,1 reasonably,1
purposes,1 regard,1
unto,1 remove,1
world,1 rend,1
because,1 restrict,1
man,1 result,1
cometh,1 right,3
suppose,1 righteous,1
American,1 said,2
slavery,1 same,2
providence,1 satisfactory,1
having,1 saving,1
continued,1 scourge,1
through,1 second,1
appointed,1 see,1
time,1 seeking,1
remove,1 seem,1
North,1 seemed,1
South,1 shall,5
terrible,1 should,2
due,1 slavery,1
discern,1 slaves,2
therein,1 somehow,1
departure,1 somewhat,1
divine,1 sought,1
attributes,1 South,1
believers,1 southern,1
living,1 speedily,1
always,1 statement,1
ascribe,1 still,2
Fondly,1 strange,1
fervently,1 strengthen,1
mighty,1 strive,1
scourge,1 sunk,1
speedily,1 suppose,1
pass,1 survive,1
away,1 sweat,1
continue,1 sword,1
wealth,1 take,1
piled,1 terrible,1
bondsman's,1 territorial,1
two,1 than,4
hundred,1 them,1
fifty,1 there,2
unrequited,1 therein,1
toil,1 those,3
sunk,1 thoughts,1
drop,1 thousand,1
blood,1 three,1
lash,1 through,1
paid,1 time,1
another,1 toil,1
sword,1 toward,1
three,1 triumph,1
thousand,1 true,1
judgements,1 trust,1
Lord,1 two,1
true,1 Union,4
righteous,1 unrequited,1
malice,1 until,2
toward,1 unto,1
none,1 up,1
charity,1 upon,1
firmness,1 us,3
see,1 ventured,1
strive,1 war,11
finish,1 war—seeking,1
work,1 we,6
bind,1 wealth,1
up,1 well,1
nation's,1 were,3
wounds,1 While,2
care,1 who,1
who,1 whole,1
borne,1 whom,2
battle,1 widow,1
widow,1 wills,2
orphan,1 without,2
achieve,1 Woe,3
cherish,1 work,1
lasting,1 world,1
peace,1 would,3
among,1 wounds,1
ourselves,1 wringing,1
nations,1 years,4







Monday, September 7, 2015

Latest on My Q Sort Prototype: Enhancing the User Experience and Inventing an Instructional Strategy

I have made so many revisions and updates to my Q Sort tool that I hardly know where to begin. The most important point to be made, though, is that all of these changes have been made based on using the app in authentic ways with people. That is, I haven't made changes based simply on my own insights or whims, but on what people have reported to me they find annoying, bothersome, or problematic, along with knowing something about what they like about the tool. Both are important because improving the tool is a matter of addressing weaknesses without ruining the good stuff. (This is my third blog posting about this tool; here are links to the first and the second.)

Many challenges remain, so I'm hardly feeling even close to being done. The biggest weakness remains the total number of statements that I can comfortably put on the screen at any one time. Right now the app can comfortably handle 20 statements, with about 25 being the absolute upper limit. For Q research this is a serious limitation. However, for instructional applications of the Q sort - which is currently my main focus - this is fine. Indeed, for a class activity I think any more than about 20 statements would likely cause some frustration.

A Look at the Current Version


Here's a video demonstrating the version I'm currently using in my teaching - I made this video as an introduction for my students:



The first update is obviously my new, improved, and awesome screen design for my title card. OK, well, maybe it's not so awesome, but hey, check out that big red Q! (Strange how one can be proud of the stupidest things.)

On a more serious note, here are probably the two most important design features that are part of the current prototype.

User Control Over the Text Size of the Statements


The issue of text size is a problem not easily solved given the limited screen space of most computer monitors. I've added two options in response to requests from users with moderate visual impairments (and frankly, I easily fall into this category). The first is the option to turn on and off a magnification window showing in large text the statement most recently clicked or moved. And, although I don't highlight this fact in the video, the user can drag this magnification window anywhere on the screen.

Now, I thought I had "nailed it" with this modification, but one person in my summer class who was the first to call for some adjustment to the statements' text size wasn't satisfied, and I'm very glad she was forthcoming enough to tell me so. What she really wanted from the beginning was a way to increase the text size of the statements directly. I had dismissed attacking the problem head on in this way because I just felt there wasn't room, but I went ahead and explored it. In the end, I was able to give users the ability to increase the text size of all of the statements, though to a limited degree. Most important, this seemed to meet the needs of the people who gave me this feedback.
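
Under the hood, bumping up the text size of every statement is just a small loop. Here's a simplified sketch of the kind of loop involved - the field naming and the single-point increment are placeholders for the sake of the example:

   repeat with i = 1 to the number of fields
      if the short name of field i begins with "statement" then
         set the textSize of field i to (the textSize of field i) + 1
      end if
   end repeat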

Another related change was the option to quickly shift all of the unsorted statements either left or right on the screen. This option was the idea of another person in my class and I found it very easy to implement. Even with these changes, the limitations on screen size still make the sorting activity feel cramped, but these changes seem to have overcome the biggest early objections from users.

As mentioned in one of my previous blog postings, other computer-based Q sort tools overcome this problem by having the user do multiple sorts, beginning with a general sort (e.g. into high, low, and middle groups), then having the user do secondary sorts within the smaller sorting groups. Although this seems adequate, the Q sort literature makes it clear that it is preferable for users to have access to all of the statements throughout the sorting activity. So, at least for now, I'm trying to stick to this "pure" form of the Q sort activity.

Developing an Instructional Strategy for the Sorting Activity


One of the most exciting developments over the past few months has been the emergence of an instructional strategy to use the Q sort activity within my teaching. This strategy took shape throughout the entire summer and I am again trying it out with my fall design course. The strategy can be briefly summarized as follows:

  1. Collaborate with the students to identify a topic for a Q sort activity. 
  2. Ask each student to complete a short online survey about the topic, resulting in a short statement of no more than about 15 words. (I use a simple Google form for this.) I then compile the statements and edit them for redundancy, resulting in a list of 15-25 statements to be used in the Q sort activity.
  3. The students complete the Q sort activity on the assigned topic either shortly before the start of an online virtual class session, or at the start of it. (I've had mixed success and problems with each.)
  4. The students then participate in a small group discussion during class using their Q sort answers as a guide for the discussion.
  5. During the small group discussion, I create a summary report of the Q sort data, which is then shared with the class when they return from the small group discussion.
  6. Using the summary report, a whole class discussion is held. Individuals or groups are given the opportunity to make comments as they wish.
Step 5 is a weak link at present in that it is not easy to do quickly with accuracy. I'm using Excel as the analysis tool and I find it takes me at least 20 minutes of focused effort to get a minimal report ready. This is very difficult to do during class, even though I'm not needed during the small group discussions. It's not a problem if I hold the Q sort activity asynchronously prior to class. Ideally, I would program another LiveCode app to do the analysis for me, which I may wind up doing. For now, I'm going to try to build an Excel template that allows me to quickly drop in the data.

The Option for a User to Review Their Choices to Any Previously Completed Q Sort


This feature is critical for step 4 of the instructional strategy, though it wasn't available until recently. Before that, I had asked people to write down their top-most and bottom-most statements, but this turned out to be a bad approach. I now have the app save each Q sort the person completes to a text file on their computer. (Actually, I had been saving this information all along as a redundant "safety net" for research purposes, so it was a pretty easy option to create.)
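
The saving itself takes very little code in LiveCode. A simplified sketch (the file name and folder here are placeholders rather than the app's actual scheme):

   -- tSortData holds the finished sort in whatever text format the app writes out
   put specialFolderPath("documents") & "/QSortResults.txt" into tPath
   put tSortData & return after URL ("file:" & tPath)  -- append this sort to the file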

A Preview of the Revisions I'm Currently Working On


I've had some uncertainty with the FTP upload of the data to my web site. My web hosting company had some issues recently, which I believe interfered greatly with the last Q sort activity I conducted. Basically, I think my web site was down when several people tried to upload their data. The bottom line for me is the need to improve the reliability of the app's data uploads and, failing that, to notify the user of any problems with the upload. I'm currently working on version 2.5, and two key changes are planned.

First, I am instituting a confirmation feature into the app that will actually check to see if the user's data has been uploaded. If it has not, it will automatically try a second time to upload it. If that fails, it will notify the user that there is some sort of Internet connectivity problem and to please try again later. I've finished the programming of this and it seems to be working.
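
For the curious, the confirm-and-retry logic can be sketched in just a few lines; the variable names here are placeholders and the app's real handler does more, but this is the gist:

   put tSortData into URL tRemoteFTPURL  -- first attempt
   if the result is not empty then
      put tSortData into URL tRemoteFTPURL  -- the automatic second attempt
   end if
   if the result is not empty then
      answer "There seems to be an Internet connectivity problem. Please try again later."
   end if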

The "please try again later" message relates to the second feature. I will give the user the option to re-upload the data to any past Q sort. I've also finished the programming of this feature, but I'm now thinking that I need to make this feature a bit more nuanced. That is, I don't think I really want the user to be able to casually upload the data manually. For some reason, I envision users doing so repeatedly "just to make sure." That could really make a mess of my source data file on the web. My thinking right now is simply to have the app first check to see if the person's data is "up there," and if it is just to let them know all is well, thus preventing them from uploading a duplicate copy. 

What's In Store for Version 3.0


I now have several months' worth of field trials under my belt with three different groups of people (in three different courses). I'm very happy with the progress I'm making, both with the app itself and with the pedagogy I'm inventing to go along with it. But, I see a big problem as I start to scale up implementation. Without going into detail, my current approach requires me to program a "new edition" for each new group that uses the tool, which obviously will become unwieldy as I scale up. I could solve this problem by creating a web-based database of user accounts, but I really want to avoid that. Why? Ultimately, I want to give this Q sort tool away to other instructors and I don't want them to have to be skilled at database management. I like the simple text file approach with FTP.

My thinking is to use the "Edmodo approach" where each new Q sort is defined and activated for the user with a special code that they have to input into the app. (Edmodo is a cool and free - my favorite combination - online learning management system that is quite popular with K12 educators.) I'll explain the strategy in more detail later if I decide to go this route, but suffice it to say that I could design a single app that would get all its data for a specific Q sort for a special group, thus allowing for unlimited sorts for an unlimited number of groups. The only consequence would be that the user would have to be given a random code to input to access each and every new Q sort. This approach seems to work OK for Edmodo and other apps, so it should work as well for me.