Sunday, December 27, 2015

Exporting LiveCode to HTML5: This Could Change Everything

One of the most recent and exciting developments in the LiveCode world is the ability to export LiveCode projects to HTML5. It's currently available only in experimental form, known as a "developer's preview" or dp version of LiveCode 8. The current version is dp 12 - dp 1 was released in August 2015. When it was first released, I spent about 30 minutes playing around with it and immediately experienced some problems. I just tried again and, to my amazement, I was easily able to export one of my early prototype games - Crack the Code! - to HTML5. (Here is my original blog post about it.) Here, try it for yourself:

Alert! It could take a little while to load.


http://lrieber.coe.uga.edu/livecode/html5/Crack_the_Code/Crack_the_Code.html


The original game had sound and the option to change the word list. Unfortunately, neither of these options is working. I'm hopeful that I just don't know some important information about how to package sound for HTML5, so I'll probably post a question about this to the LiveCode forum dedicated to HTML5. (Sound provided very important feedback to this particular game, with a pleasant chime given when you correctly matched the secret word, and an appropriate "crash" when you did not.) I didn't see any mention of sound issues when I browsed the forum, but I did find some useful information about why cut/paste will not work. Apparently, there are strict and narrow restrictions on what elements of a web page can access the clipboard due to security issues. I read one person's suggestion for a workaround, so I might try that in an update. The amount of time it takes to download the HTML page is also causing me some concern. As I understand it, the entire LiveCode engine has been converted to JavaScript, so it's a big download. On my slow Internet connection at home, I had to wait about a full minute for the page to fully load.

Still, it's absolutely amazing that this works without the need for any sort of plug-in. It's all done using HTML5 and JavaScript. I made no changes to the program other than cosmetic ones. HTML5 is simply an export option. You select it, then choose to save as a standalone application. A folder is created with all of the needed resources, including an HTML page that runs the show. Upload that folder to a web server and you are ready to go. The LiveCode program is essentially converted to a JavaScript module.

Far From Perfect


The good folks at LiveCode have been careful to alert us that this feature is far from perfect and that there is still much work they need to do before we go from a developer preview version to a stable version. I have had some issues getting this project to work. Everything works fine when I run the HTML5 project locally, but it only works on the Internet when I run it from one of my accounts on a University of Georgia server. I have access to other servers, most notably the one housing my NowhereRoad.com site, but when I try running it from there, I get the error "Exception thrown, see JavaScript console." I presume that means my NowhereRoad server is not set up properly to run JavaScript, even though I thought it was. I obviously need to check into this.

My Q Sort prototype is an excellent candidate for HTML5 export. I tried exporting it, but a key feature does not work - the saving of data into a local text file. It would be fantastic if I could get this working as an online tool. I suspect that many programming techniques I've learned and now use for many of my projects will not work in HTML5 without substantial reprogramming. But, learning new techniques is all in the spirit of this blog. And, perhaps I'll uncover and report some important bugs or needed features that will help in the overall effort by the LiveCode team.

How Does This Change Anything?


The inability to deploy LiveCode projects over the Internet has been LiveCode's main distribution gap. But, it has been a huge gap - a bona fide gaping hole. I began using LiveCode about six years ago when I became interested in creating native mobile apps. It's easy to develop iOS apps for iPhone and iPad, and even easier to develop for Android devices. It also does a great job if you want to distribute to desktop or laptop computers running MacOS, Windows, or Linux. But the inability to distribute over the Internet has been a serious limitation. Frankly, most of my students have had little interest in learning LiveCode precisely because they could not distribute their projects over the Internet. Similarly, their interest in Articulate Storyline has primarily been due to the fact that exporting projects to HTML5 is Storyline's strong suit.

I have written here before about my fundamental criticism of authoring systems such as Articulate Storyline and Adobe Captivate. A short recap is simply that although these software applications do some things extremely well, their underlying structure seriously narrows or constrains the range of software designs possible. Perhaps I should say that this comment is really not meant to be a criticism, but an observation. What they do, they do very, very well. But, if you have a creative or innovative idea for software design that doesn't fit the tight boundaries of these authoring packages, you are out of luck. Again, one of the things these systems do very well, especially Storyline, is deliver HTML5 compatible tutorials. Consequently, almost all of the projects that my students design are of this ilk. The important point to be made here is that their design ideas are all shepherded down a very predictable, narrow path.

Fortunately, it's easy to embed the HTML code needed to run an HTML5 LiveCode project into an existing web page, including one created by Storyline or Captivate. Here is an example of the minimum code needed for an HTML page to make it all work:

 <html>  
   <body>  
   <canvas style="border: 0px none;" id="canvas" oncontextmenu="event.preventDefault()"></canvas>  
    <script type="text/javascript">  
     var Module = { canvas: document.getElementById('canvas') };  
    </script>  
   <script async type="text/javascript" src="standalone-community.js"></script>  
  </body>  
 </html>  
If you play "Crack the Code!" you will see that it is enclosed in a default web page that was created by LiveCode during the export process. But, I hope you get the idea of how easy it would be to include a LiveCode app within any existing HTML document. So, one can envision a software design process that includes a suite of tools, such as Storyline and LiveCode.

Final Thoughts


I want my students - and all instructional designers - to be able to live up to my motto for why to learn computer programming: "If you can imagine it, you can build it." If LiveCode can perfect the export to HTML5 option, I think it will become a serious competitor to Storyline and Captivate. More importantly, it could lead to much higher quality - and diverse - software for learning in online environments.

The title of this post is obviously meant to convey tentativeness. The direction LiveCode is going could be a game changer in the world of creative online software design, particularly in educational or instructional contexts. However, we are not there yet, so it's still too early to pour the champagne. But I'm optimistically keeping a bottle chilled.



Monday, December 21, 2015

Are You Like Me? More On Creating a Custom Tool to Analyze Q Sort Data

This is a follow-up to my recent post describing a tool I created to analyze Q sorts. In that post I briefly called attention to a button on the analysis screen titled "Are You Like Me?":



In this post, I will explain how this special analysis works. Although this analysis option is specifically meant to support my goal of using the Q sort process within an instructional context, the idea behind it originated much earlier, and the analysis itself can be applied to other survey data types besides Q sorts. So, allow me to give a little background first.

Those in the field of instructional design like to think that one of its strongest elements is learner analysis. This is a major phase of instructional design and is well documented within all of the best known models. Even though we devote a lot of attention to learner analysis in the literature, my own opinion is that we don't practice it well. Yes, we are fond of giving pretests and surveys, but these usually only provide very superficial information about the learners for whom we are designing instructional materials. So, I feel that the theory of learner analysis falls far short of its reality and importance in practice. Perhaps I feel this way based on my formative years as a public school teacher. It was only after spending many months with my students for five or six hours per day that I began to feel as if I really knew them. So, despite following the advice and procedures in our canons, and even though we might feel comfortable in thinking that we do, I tend to think we know very little about the people we are designing instruction for.

Doing a Better Job at Teaching Learner Analysis


We also don't teach learner analysis very well, at least, I don't think I do. I've long been trying to come up with some innovative activities to help my instructional design students recognize both the importance of learner analysis and the difficulty in doing it well. At the very least, I want to instill in them the idea that we should be very cautious and skeptical about thinking that we really understand who our learners are, what they know, and what they want. So, I've been trying to design some class activities over the past few years that get at some of these deeper principles. I've yet to succeed. But a few years ago I came up with a class activity called "Are You Like Lloyd?" that seemed to hold some promise. I asked a series of questions to see how many students in the class had similar life experiences to me. For example, one question was "Do you have at least three siblings?" I thought this was a good question because coming from a family of five, I feel certain that growing up with a few brothers and sisters will make a big difference in how you see the world. After asking about 10 questions like this, a nice discussion usually ensued. I always had the sense that I was on the verge of designing an interesting game based on this activity, but I could never quite figure out what the game's goal would be. Is it to ask the fewest questions needed to demonstrate that everyone in the room was different from me in some way? Heck, that would be easy for me just with the question "Do you play an accordion?" Maybe the goal should be to ask questions to show how much people shared. That seemed easy to do too and not very interesting (e.g., Do you like music? Do you like ice cream?). In the end, it was interesting enough just to ask some questions that I thought reflected important influences on how I saw the world and to find out how many in the room were like me or different from me.

Analyzing a Q Sort: Person to Group Comparison


The "Are You Like Lloyd?" activity is the inspiration for my current work with using Q sorts as instructional activity. I've been playing around with the idea of comparing each person in the group to every other person in the group in terms of their individual Q sort responses. The best way to explain this is with an example. Let's consider a very short and simple Q sort that has only four statements. The Q sort board ranges from least to most favorite with column values of -1, 0, and +2 with two slots in the 0 column:



And, to keep the example as concrete as possible, let's imagine these ice cream flavors are the four statements of the sorting activity:
  1. Chocolate
  2. Vanilla
  3. Strawberry
  4. Peach
Let's imagine a group of five people sorted these four statements as follows:


Let's consider Bob's sorting of these flavors. He likes vanilla the most, chocolate the least, and is neutral about strawberry and peach.

The idea of the "Are You Like Me?" activity is to consider how similar Bob is to all of his classmates, then to do the same comparison for the other people, one by one. Let's stick with Bob for now and consider what this analysis looks like if we use Excel to do the comparisons. Here is the entire spreadsheet:



The absolute values of the differences between Bob's scores and those of the other people are shown in orange. Let's compare Bob to Jane. Bob rated chocolate as -1 (his least favorite), but Jane was neutral about chocolate (0), for a difference of 1. We use the absolute value because we are only concerned about the "distance" between the two scores. (The formula for cell B12 is: "=ABS(B$5-B6)".) We don't really care who has the higher or lower score for any particular flavor. Vanilla was the favorite flavor for both Bob and Jane, so their difference was 0. So, when it comes to vanilla, Bob and Jane are exactly alike. Bob and Jane were also exactly alike in their rating of peach, but they differed by 1 on their rating of strawberry. If we sum the differences between Bob and Jane, as shown in blue, we get a sum of 2. Now, take a look at how different Bob is from Jim, Sarah, and Susan. If you scan the sum of differences you see that Bob and Sarah are exactly the same, whereas he is most dissimilar to Jim and Susan with sums of 4 for his comparisons to them. If we sum up the sum of differences - the "grand sum" shown in green - we get a rough idea of how different Bob is from the group on ice cream flavor preferences.

Automating the Difference Analyses Between All Group Members


Great, now what? Well, we just need to do the same analysis for all other members of this group. With a group of five, I'm sure you could take it from here and finish my Excel spreadsheet. But, it would be rather tedious to set it all up. And imagine if you had a group of 10, 20, or 100 people. For this reason, I decided to program LiveCode to do the analysis for me. I set it up in such a way that it doesn't matter how many statements there are or how many people are in the group.
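
I won't paste the entire LiveCode project here, but the core computation is small enough to sketch. The sketch below is not the actual script from my tool; it assumes the data have already been reduced to one comma-delimited line per person (the name first, then one plain numeric rating per statement), and the function name compareAll is made up for illustration.

 //Sketch only: sum of each person's differences from everyone else in the group
 //Assumes tData holds one line per person: name,rating1,rating2,...
 function compareAll tData
    local tResults
    repeat for each line tPersonA in tData
       put 0 into tGrandSum
       repeat for each line tPersonB in tData
          if item 1 of tPersonB = item 1 of tPersonA then next repeat //skip comparing a person to herself
          put 0 into tSum
          repeat with i = 2 to the number of items in tPersonA
             add abs(item i of tPersonA - item i of tPersonB) to tSum //"distance" on one statement
          end repeat
          add tSum to tGrandSum //running total of this person's differences from the group
       end repeat
       put item 1 of tPersonA & comma & tGrandSum & return after tResults
    end repeat
    sort lines of tResults numeric ascending by item 2 of each //most "like the group" at the top
    return tResults
 end compareAll

Feeding it one line per person from the example above should reproduce the kind of summary list shown below (Bob,10 and so on). The real tool also builds the person-by-person breakdown and copies everything to the clipboard, but these nested loops are the heart of it.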

To demonstrate, I plugged the hypothetical data above into my LiveCode program, clicked on the "Are You Like Me?" button and here is the output:



The left side gives me an overall comparison summary. A quick click of the "Copy to Clipboard" button on the left allows me to paste the data here:

Participant,Sum of Comparisons
Bob,10
Sarah,10
Jane,12
Jim,16
Susan,16

As noted above, the group is arranged from low to high. Bob and Sarah have the fewest differences with their classmates, and Jim and Susan have the greatest differences.

A click of the other "Copy to Clipboard" button allows me to easily paste the results of all the comparisons below - this is actual text output complete with commas in key spots to allow for easy formatting after pasting into Excel with the "Text to Columns" option (ordered low to high within each group):


Participant,Difference (Absolute Value),Note: The comparison person's name is in the first row of each block.

Bob
Sarah,0
Jane,2
Jim,4
Susan,4
SUM,10

Jane
Bob,2
Sarah,2
Jim,4
Susan,4
SUM,12

Jim
Bob,4
Jane,4
Sarah,4
Susan,4
SUM,16

Sarah
Bob,0
Jane,2
Jim,4
Susan,4
SUM,10

Susan
Bob,4
Jane,4
Jim,4
Sarah,4
SUM,16


Now, obviously, this dummy data doesn't yield very interesting results, but I hope you get the idea.

What Does It All Mean?


Is it better to be similar to everyone else, or different from everyone else? Neither. I think it's important not to impose any value-laden interpretation on the results. That is, it is not good or bad to be similar or different, but only to recognize that there are similarities and differences and to explore why. My hope is that doing this analysis right after a group completes a Q sort will stimulate some lively discussion and leave participants with a slightly better understanding of each other. And, if these are people studying to become instructional designers, maybe it will give them a deeper understanding and appreciation of learner analysis.

Here's an interesting anecdote based on some early trials. I tried this activity out with one of my doctoral classes in the fall of 2015. The course was required for our majors, but it was open to nonmajors too. We only had one nonmajor take the course. During a break, he brought it to my attention in a hallway conversation that he noticed he tended to be the person at the bottom of each comparison group. We both found it interesting that the sorting activities "revealed" him to have a different view or perspective about the statements being sorted than the others in the class.

There may be times that it is useful to group people who have similar or different points of view. Perhaps a certain project would benefit from having people who shared a similar point of view. Or, conversely, perhaps you wanted to maximize the diversity of ideas within a team of people as they worked on a project. It's important to remember that the original purpose of a Q sort activity is to identify a small number of profiles or categories of the people who complete the activity. That involves a lengthy and sophisticated analysis (that I like to refer to as an upside-down factor analysis) using a special statistical software program. Yet, comparing each person's answers to those of every other person in the group, as shown above, seems to be the start of an alternative process for achieving an analysis with a similar goal. I think I need to do some further follow-up computations - not yet invented - to tease out subgroups or profiles.

Final Thoughts


As I end this blog post, I want to reiterate that this analysis will work on any survey involving quantitative data, such as surveys based on the more familiar Likert scale. So, even if you are not interested in Q sorts, you might find the ideas behind this analysis intriguing and useful if you are someone who wants to know more about how a group of people tick.

Also, I really don't know if any of this work will yield anything particularly useful. In the end, I may simply be generating an overly-complicated way of conducting an icebreaker activity. Yet, there seems to be something inherently important and useful in it. I look forward to exploring this issue in my Q Sort research. So, we'll see.


Saturday, December 12, 2015

Creating a Custom Tool to Analyze Q Sort Data

December is finally here. It's been a wonderful semester at the University of Georgia. They seem to get better the longer I'm here. Although I haven't written many posts lately, I have been doing quite a bit of work with LiveCode. In particular, I've done much more work on my Q sort tool. In case anyone wants some background, here are links to my previous three posts about my Q sort tool:
  1. Creating a Q Sort with LiveCode
  2. Lloyd's Q Sort Project: Importing Data from an Internet File
  3. Latest on My Q Sort Prototype: Enhancing the User Experience and Inventing an Instructional Strategy
In short, a Q sort is a quantitative way to measure subjectivity. I know, that sounds completely contradictory. Think of it as a ranking procedure with a few twists. It's a procedure that's been around since the 1930s. One of my recent accomplishments is creating a much improved Q sort tool. The design improvements have been significant enough to warrant a full step in the version number - I'm now up to version 3.1. However, this post is not about that. Instead, it's about creating a completely new tool that focuses on analyzing the data that results from the Q sort activity.

As I've explained in previous posts, I've been trying to come up with an instructional strategy using a Q sort as the main class activity. In my early field trials, I really struggled to provide the class participants with the sorting results as quickly as possible. The reason is that I feel that the Q sort activity promotes a very active, "minds on" experience for participants. In order to take full advantage of their thinking and engagement, I need to compile, analyze, and report to the class participants the results of the Q sort activity as soon after they complete it as possible. In this post, I'll explain why that was such a challenge and what I've built to make this task much, much easier.

The Q Sort Raw Data


When a person completes a Q sort activity, a data string is created and uploaded to a text file stored on the Internet. For example, consider a recent Q sort activity completed by an undergraduate class on the topic of favorite vacation destinations, using the prompt "Sort these vacation destinations from most to least favorite." Here is the list of vacation destinations they sorted:
Aspen, Colorado
Branson, Missouri
Cancun, Mexico
Hawaii
Home
London
Miami
Myrtle Beach
New York City
Orlando
Paris
Pittsburgh
Rome
San Francisco
I found a web site that listed 10 of the top vacation spots in the United States and I added the rest. (Yes, adding Pittsburgh was rather mischievous of me. But hey, I think it's a fantastic place to spend one's vacation. I also added "home," which I thought was rather insightful.) A lot of the students had never heard of Branson, Missouri, which might be a good thing.

Each line of data is an individual's data separated by commas. Here are a few lines of the data to illustrate:

 vacation-abd,Tue, 1 Dec 2015 12:56:26 -0500,Favorite Vacation Destinations?,PERSON1,0,-2,+1,+3,0,+1,0,-1,-1,-3,+2,-1,+1,0,Summary Statement Results,2,Time (seconds),59  
 vacation-abd,Tue, 1 Dec 2015 12:56:31 -0500,Favorite Vacation Destinations?,PERSON2,+1,-1,0,+3,-3,+1,0,-1,0,-2,0,-1,+2,+1,Summary Statement Results,+3,Time (seconds),78  
 vacation-abd,Tue, 1 Dec 2015 12:57:51 -0500,Favorite Vacation Destinations?,PERSON3,+3,-3,+1,+2,0,+1,+1,-1,-1,-1,0,-2,0,0,Summary Statement Results,+3,Time (seconds),131  
 vacation-abd,Tue, 1 Dec 2015 12:57:51 -0500,Favorite Vacation Destinations?,PERSON4,0,-3,-1,+1,0,+3,0,-1,0,+1,+2,-2,+1,-1,Summary Statement Results,+3,Time (seconds),159  

Each line starts with the unique Q sort code, then a date/time stamp, then the name of the Q sort, then the name of the person (I obviously substituted PERSON for each name). This is all followed by the sorting data, where each statement's rating is provided. I also ask participants to rate a "summary statement," something I explain briefly below, so those results are next. Finally, the time it took the person to complete the Q sort (in seconds) is recorded.
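
For anyone curious how a line like this gets pulled apart in LiveCode, here is a rough sketch of the kind of parsing involved. It is not the code from my actual tool; the function name getRatings is made up, and the number of statements is passed in rather than hard-coded. Note that the date stamp itself contains a comma, so it is safer to anchor on the literal item "Summary Statement Results" than to count items from the left.

 //Sketch only: pull one person's name and ratings out of a raw data line
 function getRatings tLine, tNumberOfStatements
    set the itemDelimiter to comma
    repeat with i = 1 to the number of items in tLine
       if item i of tLine is "Summary Statement Results" then
          put item (i - tNumberOfStatements - 1) of tLine into tName //the person's name
          put item (i - tNumberOfStatements) to (i - 1) of tLine into tRatings //the sort itself
          return tName & comma & tRatings
       end if
    end repeat
    return empty //the line is malformed
 end getRatings

Calling getRatings on the first sample line above with 14 as the second parameter would return PERSON1 followed by that person's fourteen ratings.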

OK, great, I have collected data. How does one make sense of these data quickly and accurately in order to trigger some class discussion?

My First Idea: Use Excel to Analyze the Data


First, it is important to recognize that the analysis I wanted to perform was very simple in comparison to a true Q sort analysis. An actual Q sort analysis is best thought of as an upside-down factor analysis, meaning that instead of reducing the number of measures (i.e. statements) down to a smaller number of common factors, the idea is to reduce the number of people into a smaller number of profiles comprising those people. This type of analysis can take weeks and requires a sophisticated statistical package to pull it off. No, what I wanted was a straightforward analysis that I could do quickly, with results that the students and I could understand and react to. The goal of the analysis is just to trigger discussion and more reflection. That is, this is a learning goal, not a research goal. So far, I've settled on computing the sum and standard deviation for each statement based on all of the participant scores. The sum gives an overall sense of the importance of each statement for this group of people in comparison to the others. The standard deviation gives a quick sense of how much the group "agrees" with the ranking of that statement.

My basic plan is for participants to have a short small-group discussion in class - about 10 minutes - about their Q sort results immediately after completing it while I quickly analyze the data and prepare some slides of the overall results. That's a tall order in 10 minutes. And yes, my first idea was to use Excel to analyze the data. This is actually a good idea, if there were more time. I was able to successfully create an Excel file on the fly that would analyze the data, but it was a challenge to do so quickly without making any mistakes. It's hard to explain what it feels like to be in the "heat of the moment" when teaching, but it can be stressful to try to focus on a task such as this, which requires close attention to detail. Such an Excel file begins with the following:


It's easy to get the data into neat and tidy columns using the "Text to Columns" option in Excel. This option allows you to split a line of data into columns based on some delimiter, such as the handy dandy comma. As you can see, I deleted a bunch of the columns to just focus on the people and their statement ratings. I summed the results for each statement. I also computed the standard deviation for each statement (again, as a rough measure of "agreement"). This is all well and good, but I really need the data in this form:


Fortunately, Excel has a transpose option. You first copy the data cells, then you choose "Paste special." Transpose will be one of the options. I then added a column and pasted in the statement labels. Then I sorted the rows in order of sum (largest to smallest). I did the same thing for the standard deviation, though I sorted that data from smallest to largest so that the statements the participants were most in agreement about are at the top.

So, sure, I could do this in Excel, but allow me to repeat: Doing this in about 10 minutes without making a mistake is quite a challenge. And, if I wanted to do several Q sorts in a single class session, I would really be struggling. I'm good, but I'm not that good. I played around with the idea of creating an Excel template to facilitate the process, but that didn't work so well. What I wanted was a one-click solution.

Creating a One-Click Q Sort Analysis Tool with LiveCode


To meet these challenges, I created yet another LiveCode project that takes the Q sort data and produces the results quickly as described above. Here's the main screen (which uses all of the undergraduates' responses):


All I need to do is paste the raw data into the large field on the left and the statements into the field on the right. The bottom two left fields let me double-check the data to be sure everything is ready for the analysis. For example, if all is good, a person's ratings will sum to 0. (I have noticed that data is sometimes missing, such as one of a person's statement ratings. Other quirky errors in the data have happened too, such as a person's data being duplicated. I'm not really sure why this is happening. Fortunately, it's a relatively rare occurrence, and I can usually make the needed corrections. Still, it's a cause of some concern.) I also drew that long arrow from the right side of the screen to the left to remind me to check that the raw data is being parsed correctly. I built in an auto-refresh feature so that when the mouse leaves each field after pasting, the key data fields update automatically.
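
The auto-refresh itself is nothing fancy. Conceptually, it is just a mouseLeave handler in each paste-in field that calls whatever handler re-parses the data; the handler name updateChecks below is made up for illustration.

 //Sketch only: placed in each paste-in field
 on mouseLeave
    updateChecks //hypothetical handler that re-parses the raw data and refreshes the check fields
 end mouseLeave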

If all is good, I simply click the red button "Analyze" and voilà! Here is the output:


The data are all nicely computed and sorted just the way I want.
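
Under the hood, the per-statement arithmetic is nothing exotic. Here is a hedged sketch of what the sum and standard deviation boil down to for a single statement; it is not my actual script (the function name statementStats is made up), and I compute the standard deviation by hand here rather than rely on a built-in function.

 //Sketch only: sum and sample standard deviation of one statement's ratings
 //tRatings is a comma-delimited list of every participant's rating for that statement
 function statementStats tRatings
    put the number of items in tRatings into tN
    put 0 into tSum
    repeat for each item tRating in tRatings
       if char 1 of tRating is "+" then delete char 1 of tRating //ratings are stored with explicit plus signs (e.g. +3)
       add tRating to tSum
    end repeat
    put tSum / tN into tMean
    put 0 into tSumOfSquares
    repeat for each item tRating in tRatings
       if char 1 of tRating is "+" then delete char 1 of tRating
       add (tRating - tMean) ^ 2 to tSumOfSquares
    end repeat
    put sqrt(tSumOfSquares / (tN - 1)) into tSD //sample standard deviation as a rough measure of "agreement"
    return tSum & comma & tSD
 end statementStats

Run once per statement across all participants, this produces the two numbers I sort on.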

If I now click on the button "Copy All to Clipboard" I can then paste the data into Excel and use the "Text to Columns" option to produce the following formatted spreadsheet:


This is more than adequate, but a few more edits within Excel will produce the following output:


I usually just project this final spreadsheet on the large screen for students to review. But, I could easily copy and paste this into a PowerPoint slide as well.

The main point to all of this is that I can go from raw data to PowerPoint in about 2 minutes without needing any intense concentration to do so. While this isn't exactly a one-click solution, it's pretty close!

I should explain the "Summary Statement Average" of 2.41. After participants complete the Q sort, I ask them to rate one more question. In this case, the statement was "It is important to go on a real vacation at least once a year." I used the same rating scale as that used in the Q sort, which in this case ranged from -3 to +3 (a seven point scale). A summary rating of 2.41 clearly indicates that this group believes in the idea of yearly vacations. This summary statement gives me a Likert-like item that I'm considering using as a kind of weight to modify the raw data. These results are shown above under the orange headings ("adjusted"). I have some research questions related to this. Perhaps I'll explain more about that later.

Are You Like Me?


You might have noticed another button just below the Analyze button titled "Are You Like Me?" This is rather interesting and I will explain it further in a follow-up blog posting. Basically, this button will produce an analysis where each person is compared to the rest of the group. I think this has much potential for both instruction and research. It leads to some very interesting results that really seem to grab the participants' attention and interest.

Final Thoughts


I really didn't want to create this Q Sort Analysis Tool, but it was needed, and I'm obviously glad now that it's done. It makes the analysis process extremely easy for me. And, even though I currently compute only some simple statistics (i.e. sum and standard deviation), it would be very easy to add other statistics as well, should they prove necessary.

So, how did these undergraduates sort this list of favorite vacation destinations? As you probably already noted, Hawaii topped the list with very little disagreement among the group. Somewhat surprisingly, London was next (I would have bet on Paris). I find it very interesting that "Home" garnered the most disagreement and this result could be used to initiate some interesting discussion. Alas, Pittsburgh was second to last and there seemed to be little disagreement about this among the students. I'll feel sorry for them the next time I'm enjoying a delicious fish sandwich with pierogies on the side at Cupka's Cafe on Pittsburgh's southside. But, Pittsburgh did rank higher than Branson, Missouri. Perhaps "Thank God for Branson, MO!" should become Pittsburgh's new motto.











Friday, November 6, 2015

Report on My LiveCode Workshop at AECT 2015 in Indianapolis

Yes, I'm currently in Indianapolis attending the annual conference for the Association for Educational Communications and Technology (AECT). I conducted a LiveCode workshop at the conference on Wednesday, making this the fourth year in a row I've done so. Although AECT had inadvertently dropped my workshop from the conference promotions, all turned out well as I had 16 people who registered. The workshop went well, thanks mainly to the great attitudes of this friendly group of people. I also must thank my graduate assistant, Tong Li, for assisting me. Tong did so solely out of his interest in coding and the chance to meet and work with a group of like-minded people.

I used one of my new "LiveCode First Projects" that I first created and tried out this past summer when I was teaching in our online design studio at UGA. Three hours goes by quickly, but we were able to do the "Visit the USA!" and "Mad Libs" projects. These, and others, can be found on my LiveCode Workshop site. (Yeah, it is about time I update this site to make it at least a little less ugly.)

I also created a Google presentation for the workshop that is definitely still in the "construction" process. This is not meant to be a stand-alone resource, so if you didn't attend the workshop, don't expect it to guide you in some special way. But, if you did attend the workshop, it should help guide you to the various projects and resources covered during the workshop. I definitely recommend that everyone go and get Stephen Goldberg's free PDF "LiveCode Lite: Computer Programming Made Ridiculously Simple." I would nominate this as a top contender for the "Missing LiveCode Manual."

There is a link to a variety of videos in the presentation. Here is one that I showed during the workshop that I think does a great job of providing some inspiration and rationale for the need to learn some coding:


It's geared, I think, to a high school audience, but it also works well for any adult audience.

I'll be updating the Google presentation quite a bit in the weeks and months to come, especially because I'm scheduled to teach three LiveCode workshops for the following groups in the coming months:

  1. Conference on Higher Education Pedagogy (CHEP) at Virginia Tech on February 9, 2016 - here's a direct link to my workshop page
  2. OLLI at UGA - This will actually be a "mini-course" consisting of four sessions beginning on February 23, 2016.
  3. UGA's Center for Teaching and Learning Speaker Series - I'll be doing a very short workshop, really just a short overview, of LiveCode probably sometime in March (they are currently finalizing the spring schedule).
So, I extend my thanks to all the people who attended my workshop and to AECT for again giving me the chance to do it.

Back to the conference... 

Tuesday, October 6, 2015

Lloyd's Word Cloud: What a Difference "Repeat for Each" Can Make

As mentioned in my previous posting, I decided to update the first part of my word cloud analysis - the part that goes through a passage of text and computes a list of the unique words and their frequency of use - with the "repeat for each" method. I am very happy that I did.

I used a speech given by former President Jimmy Carter on July 15, 1979, titled "Crisis of Confidence" and often referred to as the "Malaise Speech." This passage of text has 3301 words in it. (Click here for a transcript of the speech and click here to watch a video of it.) I've had Jimmy Carter on my mind because of the recent news of his serious health condition. He's also from Georgia. And, I've always really liked him and respected him as a human being. Anyhow, I thought this speech would make for another good example of a text passage to run through my word cloud app. Plus, it's fairly lengthy.

The time needed by LiveCode to go through the passage to find the unique words and then compute the frequency of each, using the "repeat for each" method, was 26.52 seconds. In comparison, my original code using the "repeat with" method took 1604 seconds, or 26.7 minutes - about 60 times as long. To use a technical term, "Wow!"

Here's the resulting word cloud of the speech using the top words:


So, What's the Deal with "Repeat for Each"?


Good question. First, here's a simple example comparing the "repeat for each" with the "repeat with" approaches.



Here's the code for the "repeat with" button:

 on mouseUp  
   put empty into field "two"  
   put the number of words in field "one" into L  
   repeat with i = 1 to L  
    put word i of field "one"&return after field "two"  
   end repeat  
 end mouseUp  

Here's the code for the "repeat for each" button:

 on mouseUp  
   put empty into field "two"  
   repeat for each word varWord in field "one"  
    put varWord&return after field "two"  
   end repeat  
 end mouseUp  

Here is the key difference: the expression word i of field "one" in the top example is equivalent to the variable varWord in the bottom example. The line repeat for each word varWord in field "one" automatically sets the value of the variable varWord to the next word. In both cases, varWord and word i of field "one" contain the value of each successive word found in the paragraph on the left during each cycle of the repeating loop.

You might say that these two repeating code structures don't look that different, so why bother? Well, the "repeat with" version executes much more slowly because each reference to word i of field "one" requires LiveCode to scan through the field, word by word, from the beginning until it reaches the correct word. I infer from this that "repeat for each" keeps track of its place in the text and is able to quickly go to the next word without having to first start at the top and work its way down.

I'm now wondering if I will ever use the "repeat with" method again. However, the "repeat with" method has the advantage of tracking the loop counter - the word or line number - in a local variable (I used "i" in the example above). Now, one can also do that with the "repeat for each" method, but you would have to set up the local variable right before the loop begins, such as put 0 into i, then manually increment this variable at the start of each loop, such as add 1 to i. No big deal, but it is two extra lines of code.
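
Here is what that looks like in practice - a trivial sketch that mirrors the "repeat for each" button above but also keeps a counter:

 on mouseUp
    put empty into field "two"
    put 0 into i //set up the counter before the loop begins
    repeat for each word varWord in field "one"
       add 1 to i //manually increment it each time through
       put i && varWord & return after field "two" //the word number is now available too
    end repeat
 end mouseUp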

Special Thanks Again to Richard Gaskin


I conclude with yet another shout out to Richard Gaskin who kindly took the time to point me in this direction months ago. OK, so it took awhile to "see the light," but I'm finally there.

Now, I just have to hunker down and spend some time with Ali Lloyd's script for creating a visually sophisticated word cloud to learn how he did it exactly. LiveCode has a great community!






Wednesday, September 30, 2015

Update on Using LiveCode to Build a Word Cloud: A Cloud is Forming!

I spent about two more hours on my little word cloud project. I knew that the next step of building the actual word cloud from the list of unique words and frequencies was definitely within reach. Here's an example of one of the first word clouds I built using the text of Abraham Lincoln's second inaugural address:


There is a "mild" hack at play here. The words simply go to a random spot within this square area. Each separate field containing a word has the script "grab me" on mousedown so that I can easily move the words to a more aesthetically pleasing location. I decided it wasn't worth trying to figure out ways to keeps all of the words from overlapping, etc. However, for an excellent example of how to accomplish this, check out the blog posting by Ali Lloyd on his efforts of building a word cloud. Ali is one of the excellent professionals who work at RunRev (the parent company of LiveCode). Ali's solution is exactly what you think of when you think of a word cloud. It has words of different sizes and colors with different orientations filling every nook and cranny. It's really marvelous. So, many thanks to Ali for sharing this link with me in his comment to my previous blog posting on this topic. I'll be studying his code for some time to come. (And any script that uses sines and cosines makes me want to purr.)

How I Did It


OK, back to my humble attempt. One of the challenging parts to the project was figuring out the step-wise progression of font sizes. The above word cloud looks OK, but I was able to improve the word cloud algorithm in several fundamental ways. All of the code to build the word cloud is in the green button "Build Word Cloud" shown at the bottom of this post, but here are a few key highlights.

Font Size Step-Wise Progression


I perfected the step-wise progression of the font size so that the word with the highest frequency of use had a font size of 96 pixels. In the example above, the font size is directly proportional to the frequency. That was inadequate for many reasons, the most obvious being that the font sizes can be radically different for just the first few most frequently used words. So, I revised the script so that the next most frequently used word had a font size 12 pixels smaller, no matter how many fewer times it was used, and so on. Let me explain a little further. If one word was mentioned 100 times, but the next most frequently mentioned word was used only 20 times, then my revised script would give the second word a font size only 12 pixels smaller. Think of 12 pixels as the height of the "step." From a data visualization standpoint, that skews the proportion in an inappropriate way, but it makes for a more aesthetically pleasing outcome. So, I think it all depends on what the purpose of the word cloud is. For scientific purposes, it is inadequate because it skews the output, but for a quick visual that is pleasing to the eye and gets across the gist of what's going on in a passage of text, it's fine. I also made 12 pixels the smallest font size that would be used. (In my original script, it was possible to have one word be 96 pixels and all remaining words 12 pixels if the most frequently used word was mentioned an inordinate number of times as compared to all other words.)
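
Here is the heart of that step-wise logic, pulled out of the full button script shown at the end of this post (varFontChangeAmount holds the height of the step - 12 in the description above - and varMaxFrequency starts out as the highest frequency in the list):

 //Drop down one font "step" each time a new, lower frequency is encountered
 if item 2 of line i of field "word frequencies" < varMaxFrequency then
    add 1 to varFontDifference
    put item 2 of line i of field "word frequencies" into varMaxFrequency
 end if
 put 96-(varFontDifference*varFontChangeAmount) into varTextSize
 if varTextSize < 12 then put 12 into varTextSize //12 pixels is the smallest allowed size
 set the textSize of field "word object" to varTextSize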

Adding Color


I also added the option to pick a color at random using the following code:

    put random (255) into rColor  
    put random (255) into gColor  
    put random (255) into bColor  

    if varColor is true then   
      set the foregroundcolor of it to rColor,gColor,bColor  
    else  
      set the foregroundcolor of it to black       
    end if  

You'll need to scan the code below for these lines in the button script. The first three lines just pick three numbers at random from 1 to 255 (LiveCode's random function starts at 1). These are used to produce a random RGB color if the "Color" option is checked.

These changes produced the following word cloud:


I know my graphic design skills don't qualify me to give any "expert" opinion, but it definitely seems like an improvement to me, at least aesthetically. Again, though, whether you get a more accurate representation of the data is an important question to ask.

Another improvement was the use of the "font step" slider at the top of the screen. When I ran the algorithm with different text passages, I found that 12 was not always the optimal number for the font step. I decided it was better to let the user experiment with this. A final minor improvement, I think, is making sure that the two words with the highest frequency are always shown in black for added emphasis. Here's a screen shot of the card that builds the word cloud.



I Found Two Golden Nuggets: formattedWidth and formattedHeight


One of the wonderful outcomes of building this project is discovering the "formattedWidth" and "formattedHeight" properties. These properties do all of the hard work of figuring out exactly how wide or how tall a text field needs to be for its contents to fit perfectly within it. I didn't know about these properties when I was first building my Q Sort project, so I came up with my own function - very imperfect - to try to do the same thing. I've since updated my Q Sort app to use these properties. Here are the two key lines of code that accomplish this feat:

    set the width of field "word object" to the formattedWidth of field "word object"  
    set the height of field "word object" to the formattedHeight of field "word object"  

Next Steps


As I looked at Ali Lloyd's code, it reminded me of Richard Gaskin's advice to me almost a year ago to use the "repeat for each" form of going through a list of data rather than the "repeat with" approach that I have become so fond of. My code that computes the frequency of the words in a passage of text is painfully slow, so I'm now very motivated to get with it and try out the "repeat for each" approach. So, look for at least one more update on this little project.

As always, the bottom-line for me is that I continue to learn new things every time I build a LiveCode project, however small. But, is there really any other way?

Script on the Button "Build Word Cloud":


 on mouseUp  
   //Erase any existing word cloud first  
   put the number of fields into L  
   repeat with i = 5 to L-1  
    put i-4 into j  
    put item 1 of line j of field "word frequencies" into varFieldName  
    put varFieldName into message  
    delete field varFieldName  
   end repeat  
   put false into varColor  
   if the hilite of button "color" is true then put true into varColor  
   put the thumbposition of scrollbar "fontdifferencebar" into varFontChangeAmount  
   //Build the word cloud  
   set the movespeed to 0  
   //Determine the largest frequency - this will get the largest font size in the word cloud  
   put item 2 of line 1 of field "word frequencies" into varMaxFrequency  
   put 0 into varFontDifference  
   put field "minimum frequency" into varMinFrequency  
   //This is the repeat loop that will create each word, resize its text size, then move it  
   repeat with i = 1 to the number of lines in field "word frequencies"  
    put random (255) into rColor  
    put random (255) into gColor  
    put random (255) into bColor  
    if item 2 of line i of field "word frequencies" < varMaxFrequency then  
      add 1 to varFontDifference  
      put item 2 of line i of field "word frequencies" into varMaxFrequency  
    end if  
    if item 2 of line i of field "word frequencies" < varMinFrequency then exit repeat  
    //The next two lines determine the area of the screen where word cloud will be built  
    put random(300)+100 into x  
    put random(300)+100 into y  
    //Create the next word for the word cloud  
    copy field "word object" on card "library" to this card  
    hide it  
    if varColor is true then   
      set the foregroundcolor of it to rColor,gColor,bColor  
    else  
      set the foregroundcolor of it to black       
    end if  
    if varFontDifference<2 then set the foregroundcolor of it to black  
    put item 1 of line i of field "word frequencies" into field "word object"  
    //Determine font height for the word  
    //put varMaxFrequency - item 2 of line i of field "word frequencies" into varFontDifference  
    put 96-(varFontDifference*varFontChangeAmount) into varTextSize  
    if varTextSize < 12 then put 12 into varTextSize  
    set the textSize of field "word object" to varTextSize  
    set the width of field "word object" to the formattedWidth of field "word object"  
    set the height of field "word object" to the formattedHeight of field "word object"  
    //Rename the newly copied field as the word it contains  
    set name of field "word object" to item 1 of line i of field "word frequencies"  
    //Move the word to a random spot within the word cloud screen area  
    move it to x,y in 1 millisecond  
    show it  
   end repeat  
 end mouseUp  

Postscript: About My Formatted Code


Ali Lloyd's post reminded me that the way I've been showing code in my blog posts has been really terrible, so I did the obvious thing and googled "showing code in a blog post using blogger" and quickly found a great tool:

http://codeformatter.blogspot.com/

The much improved formatting above is the result.


Sunday, September 13, 2015

Using LiveCode to Create a Word Cloud (Almost) with a Nod Toward Data Mining

OK, it was 5:30 p.m. on Friday afternoon and I was completely spent mentally from a three-hour faculty meeting. I wanted to end the work week on an upbeat note, so I decided to take a few minutes to work with LiveCode. I've spent all of my free time this week working on my Q sort tool, so I wanted to do something brand-new. So, I started a new LiveCode project that had recently been on my mind. On Thursday I had attended an excellent presentation about data mining by one of our very talented doctoral students - Neo Hao - at the Design, Development and Research Conference hosted at the University of Georgia (and chaired by Dr. Rob Branch). Neo's presentation made me think about how easy it would be to build a simple example of a data mining program: a word cloud using the frequency of words in a given passage of text. This post is only the result of about 30 minutes of work. I didn't finish the project, but I was able to build the basics and I think it is an interesting example of using the excellent list processing capabilities of LiveCode. (Interestingly, after I started writing this blog post, I worked at least another hour on the program on aesthetic stuff, like fonts, labels, and some user feedback, in order to make the project "presentable" in a blog posting. That is the way it always is with software design - the time needed to make a program work is always much less than the time needed to make it usable.)

I'm sure you know what a word cloud is, but if not, here's an example built with wordle.net and based on titles of some of the things I've published over the past few years:



As you can see, a word cloud is simply a listing of all the unique words in the passage with a visual representation of the frequency of the words. The more times the word is used, the bigger its font size. A word cloud is an excellent visual representation of the importance of certain words in a given passage of text, and I think you can get a good, quick snapshot of what I've been writing about in my published work just by scanning this image. Notice, by the way, that inconsequential words, such as "a," "an," "the," "and," and the like are not represented. Also notice that punctuation has been stripped out. These become important points for us to consider later on.

Now, don't get your hopes up that I'm going to show how to build a word cloud as elegant as this. In fact, all I've built so far is a little program that takes a passage of text and figures out all of the unique words and computes the number of times they are used. It also strips out all of the words you decide are inconsequential. It also provides some quick summary data, such as the total number of words in the passage of text and the total number of unique words.

Here's a screenshot of the program using Abraham Lincoln's second inaugural address:




How It Works


The card consists of the following main fields displayed left-to-right on the screen:
  • original - this field contains the original passage of text you want to analyze;
  • unique words - this field contains all unique words found in the original passage;
  • word frequencies - this field contains the unique words plus their frequencies
  • ignore - this field (with a reddish background color) lists all words you want to be ignored when identifying unique words.
All of the code is contained within the button "Analyze." The program works with several repeating loops, which I've color coded blue and green:

on mouseUp
   put empty into field "unique words"
   put empty into field "target word"
   put empty into field "word frequencies"
   put empty into field "unique count"

   put empty into field "frequency count"
   put the number of words in field "original" into L
   
   //Search for Unique Words; Remove Punctuation
   put "Finding unique words..." into field "working"  //user feedback
   show field "working"
   repeat with i=1 to L
      put empty into field "target word"
      put word i of field "original" into varTargetWord
      //Strip out any punctuation found in the word
      if the last character of varTargetWord = comma then delete the last character of varTargetWord
      if the last character of varTargetWord = "." then delete the last character of varTargetWord
      if the last character of varTargetWord = ";" then delete the last character of varTargetWord
      if the last character of varTargetWord = "?" then delete the last character of varTargetWord
      if the last character of varTargetWord = "!" then delete the last character of varTargetWord
      if the last character of varTargetWord = ":" then delete the last character of varTargetWord
      if the last character of varTargetWord = quote then delete the last character of varTargetWord
      if the first character of varTargetWord = quote then delete the first character of varTargetWord
      if the first character of varTargetWord = "(" then delete the first character of varTargetWord
      if the last character of varTargetWord = ")" then delete the last character of varTargetWord
      put the number of lines in field "unique words" into LL
      put true into varUniqueWordFound
      
        //Check to see if the word has already been found
      put the number of lines in field "unique words" into LL
      put true into varUniqueWordFound
      repeat with j=1 to LL         
         if varTargetWord = word j of field "unique words" then
            put false into varUniqueWordFound
            exit repeat            
         end if
      end repeat
      if varUniqueWordFound is true then put varTargetWord&return after field "unique words"
   end repeat
   
   //Compute the frequencies of each unique word found
   put "Computing frequencies..." into field "working" //user feedback
   put the number of lines in field "unique words" into L
   repeat with i = 1 to L
      put line i of field "unique words" into varUniqueWord
      //First, look for inconsequential words and strip out
      put the number of lines in field "ignore" into LLL
      put false into varWordToIgnoreFound
      repeat with k= 1 to LLL
         put line k of field "ignore" into varWordToIgnore
         if varUniqueWord=varWordToIgnore then 
            put true into varWordToIgnoreFound
            exit repeat
         end if
      end repeat
      if varWordToIgnoreFound is true then next repeat
      //Ok, begin counting the unique words
      put 0 into varCount
      put the number of words in field "original" into LL
      repeat with j=1 to LL
         put word j of field "original" into varTargetWord         
         //Strip out any punctuation found in the word
         if the last character of varTargetWord = comma then delete the last character of varTargetWord
         if the last character of varTargetWord = "." then delete the last character of varTargetWord
         if the last character of varTargetWord = ";" then delete the last character of varTargetWord
         if the last character of varTargetWord = "?" then delete the last character of varTargetWord
         if the last character of varTargetWord = "!" then delete the last character of varTargetWord
         if the last character of varTargetWord = ":" then delete the last character of varTargetWord
         if the last character of varTargetWord = quote then delete the last character of varTargetWord
         if the first character of varTargetWord = quote then delete the first character of varTargetWord
         if the first character of varTargetWord = "(" then delete the first character of varTargetWord
         if the last character of varTargetWord = ")" then delete the last character of varTargetWord
         if varTargetWord=varUniqueWord then add 1 to varCount
      end repeat
      put varUniqueWord&comma&varCount&return after field "word frequencies"
   end repeat
   
   put "Sorting..." into field "working" //user feedback
   sort lines of field "word frequencies" numeric descending by item 2 of each
   
   put "Done!" into field "working" //user feedback
   wait .5 seconds
   hide field "working"
end mouseUp


The first repeat loop, color-coded light blue, searches for unique words while removing punctuation. It looks at each word in the passage (in the field "original") and checks to see if the word has already been found (in the field "unique words"). To do this, a second loop (shown in darker blue) is executed that looks at each word in the field "unique words." If it has already been found, the line "exit repeat" is executed and the program stops looking through the unique words already found and goes to the next word in the original passage. It's worth pausing for a moment to study this second loop. As my experience with LiveCode grows, I find interesting patterns in my coding.

The question of "uniqueness" in a given text comes up over and over. Likewise, the strategy that I came up with long ago to identify uniqueness - as shown in this dark blue code - has become a time-tested friend to me. Here's a short explanation:

I begin by creating a true/false variable; in this case I set the variable "varUniqueWordFound" to true. By setting it to true at the start, I'm saying that I'm going to assume that the next word I look at is unique, that is, I expect that it has not already been found somewhere previously in the text. In a sense, I'm "daring" the text to prove me wrong. I then do a search of all unique words previously found. Obviously, at the start there are zero words found. Then, as unique words are found, they are added to the text field "unique words." The first loop identifies the words in the original text passage - denoted by the phrase "word i" in the line found near the top of the first loop (in light blue):

put word i of field "original" into varTargetWord

So, the variable "varTargetWord" contains this ever-changing target word. The second loop compares the target word to each and every unique word already found. If a match is found (at any point) two things immediately happen: 1) I put false into the variable "varUniqueWordFound" because, after all, the target word turns out not to be unique; and 2) I exit the current repeat, which in this case is the second loop (in dark blue). So, if a match was NOT found, then the variable "varUniqueWordFound" remains true, which kicks in the last line of code shown in dark blue:

if varUniqueWordFound is false then put varTargetWord&return after field "unique words"

That line instructs the program to add the current target word to the field of unique words. The first loop (in light blue) then moves on to the next word in the passage and the operation is repeated.
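To see the pattern as a whole, here is a minimal sketch of the two loops working together. I've reconstructed it from the description above (the field names "original" and "unique words" and the variable names match the ones in my script; the counter j is just a placeholder), so the details may differ slightly from my actual code:

repeat with i = 1 to the number of words of field "original"
   put word i of field "original" into varTargetWord
   // strip any punctuation from varTargetWord here, as described above
   put false into varUniqueWordFound // assume no match has been found yet
   repeat with j = 1 to the number of lines of field "unique words"
      if varTargetWord = line j of field "unique words" then
         put true into varUniqueWordFound // the word is already on the list
         exit repeat // no need to keep searching
      end if
   end repeat
   if varUniqueWordFound is false then put varTargetWord&return after field "unique words"
end repeat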

Calculating Frequency


OK, let's move on. The program now counts up how many times each unique word is used in the original passage. This code is shown in green. As it goes, it also checks to see which words it should ignore as inconsequential, as listed in the field "ignore." (And you need to be careful about what you consider to be an inconsequential word, a point I'll return to at the end of this post.) Every time it finds that word in the original passage, it adds 1 to the variable varCount. As it checks each word in the original passage, it must again strip out any punctuation found with the word, so I just repeat the code for this operation used above. (Any time you do something more than once, that's a sign you really should create a custom function, which I would do if I weren't so lazy.) After it goes through each and every word in the original passage, it puts the unique word and its frequency into the third field, "word frequencies." This step takes the most time for the program to execute.
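For what it's worth, here is a sketch of the kind of custom function that could replace those repeated if statements. The name stripPunctuation and the variable names are placeholders I'm making up for this example, not something already in the stack:

function stripPunctuation varWord
   put ".;?!:)" & quote into varTrailing // punctuation to trim from the end of a word
   put "(" & quote into varLeading // punctuation to trim from the front of a word
   repeat while varWord is not empty and the last char of varWord is among the chars of varTrailing
      delete the last char of varWord
   end repeat
   repeat while varWord is not empty and the first char of varWord is among the chars of varLeading
      delete the first char of varWord
   end repeat
   return varWord
end stripPunctuation

Each place in the handler that now repeats the if statements could then just say: put stripPunctuation(varTargetWord) into varTargetWord.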

Finally, I sort the field "word frequencies" in order of how often each word appears, from most to least (this code is shown in red). I also added some buttons that give the user the option of sorting the list alphabetically instead.
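The alphabetical option is just the same sort command with different settings. The button's script might contain something like this (a sketch, not necessarily my exact code):

sort lines of field "word frequencies" ascending by item 1 of each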

Along the way, I add some feedback to let the user know how things are progressing.

Next Steps


The program above generates a list of all unique words in any given passage of text and the frequency with which each appears. This is the raw material needed to create a word cloud. Although I'm not sure if I'll have time (or inclination) to work further on this project, here are some ideas on what needs to be done next.

Each unique word needs to be put into an object, such as a button or a field. The text size for that object would depend on the frequency of the word. There would have to be a given maximum font size, let's say 24 pt. Whatever is the most frequent word or words would be given that maximum font size. On the other side of the spectrum, all words with a frequency of 1 would be given the minimum font size, which we'll say is 10. Or, one might decide that only words that have been mentioned more than once or twice get displayed in the word cloud. (Lincoln only mentioned the word "Bible" once, but I think its one mention is very significant.) Either way, the minimum font size would be assigned to some group of words. Then, all remaining words would be given one of the other available font sizes, depending on their frequency. So, some algorithm would need to be devised to divide up all remaining words along this font size spectrum, which would not be hard to do.
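As a rough illustration of the kind of algorithm I have in mind, a simple linear mapping from frequency to font size might look like the sketch below. The function name is made up, and the 10 to 24 point range is just the assumption described above:

function fontSizeForCount varCount, varMaxCount
   // varMaxCount is the frequency of the most common word in the passage
   if varMaxCount <= 1 then return 10 // every word appears only once
   put 10 + round((varCount - 1) * (24 - 10) / (varMaxCount - 1)) into varSize
   return varSize
end fontSizeForCount

Each word's button or field could then have its textSize set to the result.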

The tricky part would be to arrange these objects in some sort of pleasant visual display. Frankly, I'm not sure how I would do that!

So, feel free to download my code to this project and give it a try. (And don't forget to send me a copy.)

So, What's the Connection to Data Mining?


This little Friday afternoon project is a simple example of data mining in that it analyzes each and every word in a given passage and reports some statistics on that passage in a way that I, as a mere human, could not do effectively on my own. It's easy to imagine more sophisticated things you might investigate. Perhaps you are interested in the presence of certain words or combinations of words in a passage of text. For example, I find it very interesting that Abraham Lincoln only used the word "I" once in his second inaugural. (This fact makes it clear that it would be unwise to assume that all small words are inconsequential.) A teacher may find it useful and revealing to analyze essays on a specific topic submitted by all students in a class. You might want to see whether some key words are mentioned, where they appear in the essay, or how far apart they are. This is still not a substitute for actually reading the essays, but it's easy for me to see how analyzing even this small mountain of text with cleverly written algorithms would aid the teacher's overall assessment of the class's writing. But what I think is the most important characteristic of data mining is that the computer will happily do this analysis for thousands upon thousands of text passages quickly and without error. It will reveal patterns that may subsequently uncover meaning, provided the algorithm is appropriate and the person interpreting the data is astute.

And, as Neo explained in his talk on Thursday, because of public APIs we all have access to mountains of data from our Twitter or Facebook feeds. The companies themselves have access to all of these data, and you can be sure they are mining them with extraordinary precision. Is their intent noble or malevolent? I really don't know, but it's easy to speculate that it is somewhere in the middle. I find it uplifting that Google could track the spread of flu in almost real time from the search data of people who were feeling ill, rather than in the weeks required by the Centers for Disease Control (see the book Big Data by Mayer-Schonberger and Cukier). I think it is time for rank-and-file educators to be part of this conversation.

Appendix: Frequency of Unique Words in Abraham Lincoln's Second Inaugural Address



Sorted by Frequency Sorted Alphabetically
war,11 absorbs,1
all,10 accept,1
we,6 achieve,1
but,5 address,2
God,5 against,1
His,5 agents,1
shall,5 ago,2
than,4 aid,1
years,4 all,10
Union,4 Almighty,1
Both,4 already,1
let,4 altogether,2
do,4 always,1
were,3 American,1
would,3 among,1
other,3 another,1
interest,3 answered,2
right,3 anticipated,1
Neither,3 anxiously,1
has,3 any,2
may,3 appearing,1
us,3 appointed,1
Woe,3 arms,1
offenses,3 ascribe,1
must,3 ask,1
those,3 assistance,1
there,2 astounding,1
less,2 attained,1
occasion,2 attention,1
address,2 attributes,1
Now,2 avert,1
four,2 away,1
public,2 battle,1
have,2 because,1
been,2 been,2
every,2 before,1
still,2 being,1
nation,2 believers,1
could,2 Bible,1
hope,2 bind,1
no,2 blood,1
ago,2 bondsman's,1
While,2 borne,1
altogether,2 Both,4
without,2 bread,1
one,2 but,5
rather,2 called,1
came,2 came,2
slaves,2 care,1
cause,2 cause,2
even,2 cease,2
conflict,2 charity,1
cease,2 cherish,1
should,2 chiefly,1
Each,2 city,1
same,2 civil,1
pray,2 claimed,1
any,2 colored,1
just,2 come,2
answered,2 cometh,1
needs,2 conflict,2
come,2 constantly,1
whom,2 constituted,1
offense,2 contest,1
If,2 continue,1
He,2 continued,1
wills,2 corresponding,1
gives,2 could,2
Him,2 course,1
until,2 dare,1
drawn,2 declarations,1
said,2 delivered,1
second,1 departure,1
appearing,1 depends,1
take,1 deprecated,1
oath,1 destroy,1
Presidential,1 detail,1
office,1 devoted,1
extended,1 directed,1
first,1 discern,1
statement,1 dissolve,1
somewhat,1 distributed,1
detail,1 divide,1
course,1 divine,1
pursued,1 do,4
seemed,1 drawn,2
fitting,1 dreaded,1
proper,1 drop,1
expiration,1 due,1
during,1 duration,1
declarations,1 during,1
constantly,1 Each,2
called,1 easier,1
forth,1 effects,1
point,1 else,1
phase,1 encouraging,1
great,1 energies,1
contest,1 engrosses,1
absorbs,1 enlargement,1
attention,1 even,2
engrosses,1 every,2
energies,1 expected,1
little,1 expiration,1
new,1 extend,1
presented,1 extended,1
progress,1 faces,1
our,1 fervently,1
arms,1 fifty,1
upon,1 finish,1
else,1 firmness,1
chiefly,1 first,1
depends,1 fitting,1
well,1 Fondly,1
known,1 forth,1
myself,1 four,2
I,1 fully,1
trust,1 fundamental,1
reasonably,1 future,1
satisfactory,1 generally,1
encouraging,1 gives,2
high,1 God,5
future,1 God's,1
prediction,1 Government,1
regard,1 great,1
ventured,1 has,3
corresponding,1 have,2
thoughts,1 having,1
anxiously,1 He,2
directed,1 high,1
impending,1 Him,2
civil,1 His,5
dreaded,1 hope,2
sought,1 hundred,1
avert,1 I,1
inaugural,1 If,2
being,1 impending,1
delivered,1 inaugural,1
place,1 insurgent,1
devoted,1 insurgents,1
saving,1 interest,3
insurgent,1 invokes,1
agents,1 itself,1
city,1 judge,1
seeking,1 judged,1
destroy,1 judgements,1
war—seeking,1 just,2
dissolve,1 knew,1
divide,1 known,1
effects,1 lash,1
negotiation,1 lasting,1
parties,1 less,2
deprecated,1 let,4
them,1 little,1
make,1 living,1
survive,1 localized,1
accept,1 looked,1
perish,1 Lord,1
One-eighth,1 magnitude,1
whole,1 make,1
population,1 malice,1
colored,1 man,1
distributed,1 may,3
generally,1 men,1
over,1 men's,1
localized,1 might,1
southern,1 mighty,1
part,1 more,1
constituted,1 must,3
peculiar,1 myself,1
powerful,1 nation,2
knew,1 nation's,1
somehow,1 nations,1
strengthen,1 needs,2
perpetuate,1 negotiation,1
extend,1 Neither,3
object,1 new,1
insurgents,1 no,2
rend,1 none,1
Government,1 North,1
claimed,1 Now,2
more,1 oath,1
restrict,1 object,1
territorial,1 occasion,2
enlargement,1 offense,2
party,1 offenses,3
expected,1 office,1
magnitude,1 one,2
duration,1 One-eighth,1
already,1 orphan,1
attained,1 other,3
anticipated,1 our,1
might,1 ourselves,1
before,1 over,1
itself,1 own,1
looked,1 paid,1
easier,1 part,1
triumph,1 parties,1
result,1 party,1
fundamental,1 pass,1
astounding,1 peace,1
read,1 peculiar,1
Bible,1 perish,1
invokes,1 perpetuate,1
aid,1 phase,1
against,1 piled,1
seem,1 place,1
strange,1 point,1
men,1 population,1
dare,1 powerful,1
ask,1 pray,2
God's,1 prayers,1
assistance,1 prediction,1
wringing,1 presented,1
bread,1 Presidential,1
sweat,1 progress,1
men's,1 proper,1
faces,1 providence,1
judge,1 public,2
judged,1 purposes,1
prayers,1 pursued,1
fully,1 rather,2
Almighty,1 read,1
own,1 reasonably,1
purposes,1 regard,1
unto,1 remove,1
world,1 rend,1
because,1 restrict,1
man,1 result,1
cometh,1 right,3
suppose,1 righteous,1
American,1 said,2
slavery,1 same,2
providence,1 satisfactory,1
having,1 saving,1
continued,1 scourge,1
through,1 second,1
appointed,1 see,1
time,1 seeking,1
remove,1 seem,1
North,1 seemed,1
South,1 shall,5
terrible,1 should,2
due,1 slavery,1
discern,1 slaves,2
therein,1 somehow,1
departure,1 somewhat,1
divine,1 sought,1
attributes,1 South,1
believers,1 southern,1
living,1 speedily,1
always,1 statement,1
ascribe,1 still,2
Fondly,1 strange,1
fervently,1 strengthen,1
mighty,1 strive,1
scourge,1 sunk,1
speedily,1 suppose,1
pass,1 survive,1
away,1 sweat,1
continue,1 sword,1
wealth,1 take,1
piled,1 terrible,1
bondsman's,1 territorial,1
two,1 than,4
hundred,1 them,1
fifty,1 there,2
unrequited,1 therein,1
toil,1 those,3
sunk,1 thoughts,1
drop,1 thousand,1
blood,1 three,1
lash,1 through,1
paid,1 time,1
another,1 toil,1
sword,1 toward,1
three,1 triumph,1
thousand,1 true,1
judgements,1 trust,1
Lord,1 two,1
true,1 Union,4
righteous,1 unrequited,1
malice,1 until,2
toward,1 unto,1
none,1 up,1
charity,1 upon,1
firmness,1 us,3
see,1 ventured,1
strive,1 war,11
finish,1 war—seeking,1
work,1 we,6
bind,1 wealth,1
up,1 well,1
nation's,1 were,3
wounds,1 While,2
care,1 who,1
who,1 whole,1
borne,1 whom,2
battle,1 widow,1
widow,1 wills,2
orphan,1 without,2
achieve,1 Woe,3
cherish,1 work,1
lasting,1 world,1
peace,1 would,3
among,1 wounds,1
ourselves,1 wringing,1
nations,1 years,4