Visualizing Neo-Assyrian Scholars in Python

Just as I was writing this post, an author on realpython.com released an excellent overview of using Matplotlib for Python plotting; it's a highly recommended read!

(Skip to the historical context below)

Cuneiform letter from Issar-šumu-ereš to the king (http://oracc.museum.upenn.edu/saao/saa08/P336444/html)

Introduction:

In the past couple of years I’ve been trying to meld my long history of casually programming in Python with my scholarly work on scribes and knowledge from the cuneiform world.  This has translated more recently into a couple of conference papers and a forthcoming chapter in a book.  However, as most scholars can probably relate, much of the programming “research” happens at the very last minute, either in the closing days before an abstract deadline or in the weeks leading up to a conference.  I decided I wanted to spend a little bit of time thinking about how I gather and process data, so that next time I’m crunched between a deadline and results I can lean on some of the techniques I’ve worked to perfect.

To that end, I picked up two of the catalogs of texts from the open-access ORACC project State Archives of Assyria online.  I chose the catalogs from volume 8, as it’s a personal favorite of mine (Astrological Reports to Assyrian Kings by Herman Hunger), and volume 10 (Letters from Assyrian and Babylonian Scholars by Simo Parpola) because it also includes letters from scholars.  These scholars were principally writing to the king to address questions and concerns he might have had about his own fate as well as that of the land and country. The scholars used a variety of methods to answer his queries, often referencing handbooks of important omens. We can use these catalogs to investigate who was writing to the king and when, and try to place these patterns in their historical contexts.

Each catalog file is served by the ORACC servers in JSON format, and it’s relatively easy to work with:


{
  "type": "catalogue",
  "project": "saao/saa08",
  "source": "http://oracc.org/saao/saa08",
  "license": "This data is released under the CC0 license",
  "license-url": "https://creativecommons.org/publicdomain/zero/1.0/",
  "more-info": "http://oracc.org/doc/opendata/",
  "UTC-timestamp": "2017-06-21T23:31:15",
  "members": {
    "P236880": {
      "project": "saao/saa08",
      "ancient_author": "Ašaredu the Older",
      "astron_date": "a-668-03-16",
...
    "P236976": {
      "project": "saao/saa08",
      "ancient_author": "Nabu-šuma-iškun",
      "astron_date": "a-672-11-15",

Each text is contained within the “members” property and identified by its “P-number”. From there we can access the various properties relevant to our analysis.
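For example, once the JSON has been loaded into Python (as we’ll do in the next section), pulling out a single text’s metadata by its P-number is straightforward. The field names here are just the ones visible in the excerpt above:

report = catalog_data_saa_8["members"]["P236880"]
print(report["ancient_author"])  # Ašaredu the Older
print(report["astron_date"])     # a-668-03-16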

Processing with Python:

The code below was all written in a Jupyter notebook (incidentally, I was trying out the new JupyterLab while writing this and quite liked it). These notebooks make it easy to iterate through a process of data analysis and visualization and to describe the steps along the way. You can view a version of the code below here.

We begin with some standard boilerplate and imports.  We need our graphs to appear inline so we can see them and change them if necessary.  We also need to import a few standard Python libraries: json for working with the JSON files, Matplotlib for graphing, and finally two handy tools from the ever-useful collections module.

%matplotlib inline

import json
import matplotlib.pyplot as plt

from collections import Counter, OrderedDict

With the standard library tools we’re going to use loaded, we can move on to actually getting our data out of the JSON files. This is just a quick three-liner per catalog; then we merge both catalogs together:

# Load each volume's catalog from its JSON file
filename = './saa_8_catalogue.json'
with open(filename) as f:
    catalog_data_saa_8 = json.load(f)
filename = './saa_10_catalogue.json'
with open(filename) as f:
    catalog_data_saa_10 = json.load(f)
# Merge the "members" dictionaries of both volumes into a single dictionary
all_texts = {**catalog_data_saa_10["members"], **catalog_data_saa_8["members"]}
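A quick sanity check on the merge is just to count the texts (the exact number will depend on the catalog versions downloaded):

print(len(all_texts))  # total number of texts across both volumes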

Now the catalog data is represented in our working environment by a handy Python dictionary.  With that done, the first thing we’re going to attempt is figuring out how many texts each scholar wrote.  We can do that by going through each text and keeping a running tally for each author we encounter.  There are two ways to do this; the first uses a more verbose, old-fashioned for loop:

author_counts = {}
for _, item in all_texts.items():
    author = item["ancient_author"]
    if author in author_counts:
        author_counts[author] += 1
    else:
        author_counts[author] = 1

The second uses the handy Counter class from the collections module and a generator expression (a close cousin of the list comprehension):


author_counts = Counter(text["ancient_author"] for text in all_texts.values())

However we construct the counts, we need to make sure our end result is sorted by author, and we do this by using the other class imported from collections, the OrderedDict. This class maintains the order of keys in a dictionary, which until recently was not guaranteed by Python (plain dicts preserve insertion order only from Python 3.7 on).

author_counts = OrderedDict(sorted(author_counts.items()))

Before we start modifying the data, it would also be nice to know whether each author has a favored scholarly genre. We can do that by extracting the “subgenre” field from each text, collecting the values per author, and then finding each author’s most common value. This code is a bit convoluted in order to deal with the variety of data found in the catalogs. We also normalize it by assigning every text that comes from SAA 8 a value of “astrologers”, since we know it’s coming from an astrologer. With this dictionary finished, we will be able to easily pass it an author and find out what their most common scholarly genre was:

genres = {}
for text in all_texts.values():
    author = text.get("ancient_author")
    genre = "None"
    try:
        genre = text.get("subgenre").split()[1]
        if text.get("volume") == "SAA 8":
            genre = "astrologers"
    except (AttributeError, IndexError):
        pass
    if author in genres:
        genres[author].append(genre)
    else:
        genres[author] = [genre]
# Reduce each author's list of genres to the single most common value
genres = {author: Counter(values).most_common(1)[0][0] for author, values in genres.items()}
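As a quick check, we can now look up the dominant genre of any scholar, for example Adad-šumu-uṣur, who will reappear below (the exact value returned depends on the catalog data):

print(genres.get("Adad-šumu-uṣur"))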

Next we want to be able to filter our results. There are roughly 100 scholars ascribed authorship in the volumes (including “unassigned” and joint-authored texts), but for the purposes of graphing the data we’re only interested in those who wrote more than fifteen texts. Here we define our min and max parameters; we set them up first so that we can change them easily later on if we want to narrow the analysis by restricting the dataset. Next we filter our existing dictionary of authors and counts by these min and max parameters. I’ve included the for-loop version as well, but in this case I opted to make use of some fancy dictionary comprehension.

min_c, max_c = 0, max(author_counts.values())
min_c = 15
# for author, count in author_counts.items():
#     if min_c < count < max_c:
#         authors.append(author)
#         counts.append(count)
filtered_counts = {author:count for author, count in author_counts.items() if min_c < count < max_c}

With that done we’re actually ready to graph our first result. We’ve got a dictionary where each key is an author who wrote more than fifteen texts, and the value for each author in the dictionary is the number of texts they wrote.
So the next step is to use the Matplotlib library to create a horizontal bar graph of our data. This code can seem quite opaque, and part of the impetus behind this whole experiment was to try to understand graphing a bit better, so I opted to include a bunch of extra code to make the graph look nicer. I’ve been trying to figure out whether there’s a logical order to the code used to construct a graph with Matplotlib. In this case I chose to first define the variables I would need to represent my data in the graph, then configure general properties of the graphing environment, then make the changes specific to this graph (in particular, labelling it), and finally call the actual graphing function, in this case barh, with the variables I defined at the beginning.

counts = filtered_counts
labels = ["{} ({})".format(author, genres[author]) for author in list(counts)[::-1]]
data = list(counts.values())[::-1]
# Attempt to make the plot look better:
plt.figure(figsize=(6, 7)) 
plt.style.use('fivethirtyeight')
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.size'] = 12

ax = plt.subplot(111) # remove borders
ax.spines["top"].set_visible(False)    
ax.spines["bottom"].set_visible(False)    
ax.spines["right"].set_visible(False)    
ax.spines["left"].set_visible(False) #
ax.grid(axis="x", color="black", alpha=0.3, linestyle="--") # grid lines
plt.tick_params(axis="both", which="both", bottom=False, top=False, # remove tick lines
                labelbottom=True, left=False, right=False, labelleft=True) 
plt.title("Texts authored by Neo-Assyrian Scholars (n > 15)")
ax.set_xlabel("Number of texts in corpus")

# Plot the actual data:
plt.barh(range(len(counts)), data, tick_label=labels) 
plt.show()

Bar chart of Neo-Assyrian astrologers and the number of texts they wrote.

What is quite clear from this graph is that astrology far outweighs any other type of scholarship at court. This is slightly misleading, as we explicitly sampled a volume dedicated entirely to astrology. But we also know that astrologers were some of the most important advisers to the king, and their interpretation of omens was considered a preeminent form of divination (Fincke, 2017, 392).

The great benefit of the scholarly reports to the king is that when they include an astronomical observation we can sometimes date the observation itself. There are some caveats to this approach: obviously we have to take the report itself as a true observation, and a text could report an observation that occurred before (sometimes well before) the text was written. With all of this in mind, the next graph we’ll attempt to make from the data is a timeline of the same scholars seen above. To start with, the data from the catalog includes two fields, “date” and “astron_date”, the latter of which generally looks like this: "a-668-03-16". We only really care about the year and some fraction thereof, so we roughly normalize the date:

def get_year(text):
    if text is not None:
        try:
            year, month, day = map(int, text.split('-'))
            if year == 0:
                return 0
            # Year plus a rough fraction for the month and day
            date = year + month * 1/12 + day * 1/30 * 1/12
            return date
        except (ValueError, AttributeError):
            pass
    return 0

def get_astron_year(text):
    if text is not None:
        try:
            # Skip the leading "a" in dates like "a-668-03-16"
            year, month, day = map(int, text.split('-')[1:])
            if year == 0:
                return 0
            date = year + month * 1/12 + day * 1/30 * 1/12
            return date
        except (ValueError, AttributeError):
            pass
    return 0
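Using the astronomical date from the first catalog entry above as a quick test:

print(get_astron_year("a-668-03-16"))  # 668 + 3/12 + 16/360 ≈ 668.29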

Because not every text can be dated, we need to be a bit more careful when we construct the data for this graph: we use .get() so that a missing “astron_date” or “date” field yields None rather than an error, and our helper functions above return 0 for anything they can’t parse.

author_years = {}
years = []
for _, item in all_texts.items():
    # Prefer the "date" field; fall back to the astronomical date
    year = get_year(item.get("date"))
    if year == 0:
        year = get_astron_year(item.get("astron_date"))
    if year > 0:
        years.append(year)
        author = item["ancient_author"]
        if author in author_years:
            author_years[author].append(year)
        else:
            author_years[author] = [year]

Next we want to filter and sort our data again:

min_c, max_c = 0, max(author_counts.values())
min_c = 15
author_years = {author:years for author, years in author_years.items() if min_c < author_counts[author] < max_c}
author_years = OrderedDict(sorted(author_years.items()))

And, because we’re going to graph each scholar on a timeline for the entire period, we need to figure out the dates of the earliest and latest texts, and do the same for each scholar as well.

min_year, max_year, range_years = min(years), max(years), max(years) - min(years)
# author_years_active = {}
# for author, years in author_years.items():
#     author_years_active[author] = [max(years) - min(years), min(years), max(years)]
author_years_active = {author:[max(years) - min(years), min(years), max(years)] for author, years in author_years.items()}

Finally, we’re ready to make our next graph. Following the convention above, I define my data first, do some general setup, make the changes specific to this graph, set the title and labels, and finally plot the actual data. This graph also adds two vertical lines marking the beginnings of Esarhaddon’s and Assurbanipal’s reigns:

ranges = [years[0] for author, years in author_years_active.items()][::-1]
starts = [years[1] for author, years in author_years_active.items()][::-1]
labels = ["{} ({})".format(author, count) for author, count in filtered_counts.items()][::-1]
# Attempt to make the plot look better:
plt.figure(figsize=(6, 7)) 
plt.style.use('fivethirtyeight')
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.size'] = 12

ax = plt.subplot(111) # remove borders
ax.spines["top"].set_visible(False)    
ax.spines["bottom"].set_visible(False)    
ax.spines["right"].set_visible(False)    
ax.spines["left"].set_visible(False)
plt.tick_params(axis="both", which="both", bottom=False, top=False,    # remove tick lines
                labelbottom=True, left=False, right=False, labelleft=True) 

ax.set_xlabel("Years BCE")
plt.title("Years active for Neo-Assyrian Astrologers")
plt.barh(range(len(author_years_active)), ranges, left=starts, tick_label=labels, alpha=0.75) # plot the actual data

xmin, xmax = plt.xlim() # reverse the x-axis
plt.xlim(710, xmin-5)
plt.ylim(-0.75, len(author_years_active))
ax.axvline(710, color="black", alpha=0.3, linewidth=2)
ax.axhline(-0.75, color="black", alpha=0.3, linewidth=1.5)
ax.vlines(range(705,640,-10), len(author_years_active), -0.75, color="black", alpha=0.3, linestyle="--", linewidth=1)
# Plot individual texts
for i, author in enumerate(list(author_years_active)[::-1]):
    ax.plot(author_years[author], [i]*len(author_years[author]),  'bo', alpha=0.5)

# Lines for Esarhaddon's and Assurbanipal's accessions to the throne
ax.axvline(680, color="red", alpha=0.3, linewidth=3)
ax.axvline(668, color="red", alpha=0.3, linewidth=3)

plt.show()

Scholars graphed by years active

One of the benefits of this approach is that clear and distinct variable names can easily be re-used later for other forms of analysis. In the process of creating the two previous graphs we also happened to make everything we need to see the distribution of these texts over time for each author.

plt.figure(figsize=(10, 8)) 
plt.style.use('fivethirtyeight')
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.size'] = 12
plt.title("Distributon for texts from Astrologers")
ax = plt.subplot(111) # remove borders
ax.spines["top"].set_visible(False)    
ax.spines["bottom"].set_visible(False)    
ax.spines["right"].set_visible(False)    
ax.spines["left"].set_visible(False) 
plt.tick_params(axis="both", which="both", bottom=False, top=False,    # remove tick lines
                labelbottom=True, left=False, right=False, labelleft=True) 
plt.violinplot(list(author_years.values()), showextrema=False)
ax.set_ylabel("Years BCE")
ax.set_xticks(range(1, len(author_years)+1))
ax.set_xticklabels(list(author_years), rotation=90)

# Lines for Esarhaddon's and Assurbanipal's accessions to the throne
ax.axhline(680, color="red", alpha=0.3, linewidth=3)
ax.axhline(668, color="red", alpha=0.3, linewidth=3)

plt.show()

Distribution of texts over time

Historical Context:

Finally, it’s worth pointing out that all of this data-centered processing can be used to say something about the history we study. There is very good evidence that Esarhaddon was rightly concerned about the accession of his chosen heir Assurbanipal (Frahm, 2017, 188-189). Letters and reports detail multiple uprisings and attempts to overthrow Esarhaddon’s rule, ending in a purge of treasonous high officials in 670 BCE. This period also saw a highly ambitious attempt by Esarhaddon to have all officials of the empire swear to respect the transition of power between father and son. Esarhaddon died in 669, and his son Assurbanipal succeeded him, becoming the last great king of the Neo-Assyrian empire. So whatever precautionary steps and measures Esarhaddon took, they seem to have worked.

With that as historical background, we can now see the graphs in light of a king’s concern with treason, uprisings, and worry about his heir-designate. In particular, the concentration of letters from scholars right around the height of Esarhaddon’s struggle to maintain power seems to indicate a preoccupation with figuring out what the stars and other omens could tell him. As an example, the leftmost name in the above graph is Adad-šumu-uṣur, Esarhaddon’s chief exorcist (Radner, 2017, 221). The preserved texts from him cluster right before Esarhaddon’s death and Assurbanipal’s accession. It is likely that Esarhaddon was relying on his chief exorcist both to verify the veracity of reports and to double-check reported celestial omens. Obviously, a proper attempt at this analysis would want to look at the entire corpus of letters, including the large corpus of extispicy queries. However, this short overview of the evidence gives us a picture of which scholars were writing to the kings, and when.

Acknowledgements:
This analysis wouldn’t be possible without the open-access, CC-licensed data and framework of the ORACC project. And the data wouldn’t exist without the work of Mikko Luukko, who digitized SAA 8 and 10 for the State Archives of Assyria online project.

Bibliography:

Fincke, J. “Assyrian Scholarship and Scribal Culture in Kalḫu and Nineveh.” In A Companion to Assyria, edited by E. Frahm, 378–97. Hoboken, NJ: John Wiley & Sons, 2017.

Frahm, E. “The Neo-Assyrian Period (ca. 1000–609 BCE).” In A Companion to Assyria, edited by E. Frahm, 161–208. Hoboken, NJ: John Wiley & Sons, 2017.

Hunger, H. Astrological Reports to Assyrian Kings. State Archives of Assyria 8. Helsinki, Finland: Helsinki University Press, 1992.

Parpola, S. Letters from Assyrian and Babylonian Scholars. State Archives of Assyria 10. Helsinki, Finland: Helsinki University Press, 1993.

Radner, K. “Economy, Society, and Daily Life in the Neo-Assyrian Period.” In A Companion to Assyria, edited by E. Frahm, 209–28. Hoboken, NJ: John Wiley & Sons, 2017.

Good Bluetooth Headphones

A while ago (October 2014) I purchased a pair of Bluetooth headphones.  This review, only a year and a half later, is just to say they’re good headphones.  The sound quality is perfectly sufficient.  The battery life seems excellent.  They’re pretty comfortable.

Second Wave Digital Humanities

Screenshot of the Pleiades Project’s entry for Rome, a good example of second wave digital humanities.

Having finished my PhD and now moving on to the next opportunity, I’m beginning to consider the wider academic world outside of my very small discipline.  Because of my future work on the Database of Religious History, the world of digital humanities has become more important in my thinking about my own work and engagement with the wider scholarly community.

Thanks to a talk I attended on the Pleiades Project, and gearing up to work on the DRH, I began to think about the progression of digital humanities from a methodology concerned solely with the digitization of textual material for study to a generative and collaborative way of working online.  I naively called this “Second Wave Digital Humanities” in my own internal narrative as I mused about what these projects look like and might accomplish.  Eventual googling led me to a significant amount of scholarship on this exact topic:

The first wave of digital humanities work was quantitative, mobilizing the search and retrieval powers of the database, automating corpus linguistics, stacking hypercards into critical arrays. The second wave is qualitative, interpretive, experiential, emotive, generative in character. (The Digital Humanities Manifesto 2.0)

In the above quote and other literature, the term generative stands out to me as a significant part of what makes these types of projects markedly different from earlier work: the idea that we are no longer merely presenting data in a new form (the digital part) but rather creating new data through a collaborative process.  The collaborative nature of both the Pleiades Project and the DRH, with their open contribution systems of editors and content creators, has profound implications for the creation of academically canonical knowledge.  Opening the doors and letting people contribute, using the systems already developed in large projects like Wikipedia, is a crucial part of this development.  This is of course coupled with the advances currently occurring in the world of open-access publishing.

While reading about second wave digital humanities I also came across an article encouraging an exploration of what third wave digital humanities could look like.  The author, David Berry, posits a computational turn: eventually digital humanities will change how we actually engage with scholarship and knowledge at the individual as well as the institutional and societal level.  He brings up the idea of streams of data, which we now regularly interact with, as fundamentally new forms of data not accounted for by the digitization efforts of earlier digital humanities projects.  It’s all very exciting stuff, and something I’m only beginning to scratch the surface of.


Even while reading these articles, written less than five years ago, I found dead links and expired webpages.  This is a persistent problem in all forms of digital media, one that curtails meaningful long-term engagement with scholarship.

Plotting productivity

During the final three months of writing my dissertation I tried tracking my productivity by recording, in half-hour increments, what I was working on and whether or not I was being productive.  I kept all this data in a Google Sheet for ease of access no matter where I was working.  Recording this data actually helped keep me on track: returning to the spreadsheet after 30 minutes of time spent browsing the web, and seeing that the past couple of entries were also “nonproductive”, often got me out of a rut and back on track.  I also thought that it would be a fun data set to play with when I had more time, i.e. when the dissertation was done.

Well, now I find myself revising the final draft, after successfully defending it in early December, and desperate for something to distract me from the onerous task.  So I’ve taken to working with the data, using the pandas library for Python.  I completed a Coursera course on R a while back, but chose pandas out of curiosity about data analysis in Python and familiarity with the language.  What I’ve done so far is the most basic interpreting and plotting of data, but it still reveals some interesting patterns.

First, a word about the data. I tried to keep it as simple as possible, recording only the date, time (in half-hour increments), general category of work, whether or not I felt I was productive, and finally the task in simple terms. The biggest issue is obviously the subjective nature of “productivity” but since this was a tool to help me get work done, if I abused that categorization the whole idea was moot. Here’s a small excerpt from the spreadsheet:

Exporting the whole thing to a CSV file allowed me to import it into pandas and start playing with the data. The first thing I was interested in was when I was most productive. That was relatively simple to figure out by separating out the productive column along with the time, doing a count on total values, and sorting by the time. That produced a graph like this:
Productivity time graph
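For the curious, here is a minimal sketch of the kind of pandas code involved. The file name, the column names (“time”, “productive”), and the “yes” value are assumptions for illustration, since the spreadsheet excerpt isn’t reproduced here:

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file and column names based on the description above
df = pd.read_csv('productivity.csv')

# Count the productive half-hour entries in each time slot, sorted by time of day
productive = df[df['productive'] == 'yes'].groupby('time').size().sort_index()

productive.plot()
plt.show()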

You can see two obvious peaks in my productivity: one right after I got to the office in the morning, when I was feeling optimistic about the day’s tasks, and a shallower peak in the afternoon, post-lunch, when I slowly returned to a productive state.  I’m guessing the slower rise in the afternoon is due to the varying times I would get lunch, and the inevitable post-lunch tiredness.

I was then able to extract the nonproductive time and add it to the same graph, producing this:

Productivity and nonproductive time

Now there’s an interesting point to consider here.  I only recorded nonproductive time when I thought I should be working; first thing in the morning, for instance, I didn’t start marking the time nonproductive while I was eating breakfast and getting dressed.  So nonproductive time in this case represents time when I was sitting at my desk hoping to get work done.  With that in mind, it makes sense that the majority of nonproductive time occurs on either side of lunch and in the afternoon, when I was intending to work but distracted.

There’s obviously a lot more to do with this data.  Next it would be interesting to look at which days of the week I worked best, and how the length of a lunch break correlated with post-lunch productivity, but for now I’ve got to get back to work.

Accessing meta-textual information

Detail of tablet K 11151: tables as meta-textual information

One of the principal cruxes of my dissertation (and probably of most theses) is “so what?”  Dissertations are often a chance to collect and curate a large amount of data in new and hopefully meaningful ways.  It’s this “meaningful” result which often eludes the writer until the end of the process.  I hope that it becomes clearer, or perhaps the true result is often obscured by the lengthy process.

In any case, one of the points I’m narrowing down to in my own work is the process of knowledge transformation as evident in changes in textual format and layout, a meta-textual sort of analysis.  It’s a classic problem of looking for evidence of a process in the absence of any record of the process itself.  Much like the sediment after a flood, the cuneiform texts I work with show remarkably different configurations from earlier versions.  The swirling eddies of the editorial process are as ephemeral as the roiling flood waters, but they have left their lasting mark.

But what can we say beyond “See, it happened!”?  How do we access the meta-textual process behind the evidence?  Here’s where I struggle, and I’m sure that through more detailed study of the texts themselves and their antecedents some form of analysis will come forward.  I’m wary of overcomplicating the situation by trying to trace all the various threads of textual congruency.  This might be productive if all the texts could be weighted equally and accepted as canonical versions of the knowledge they represent.  However, the real evidence is far from this idyllic situation.  In reality, the texts come from many different traditions spanning a huge range of time, written under the influence of different geographical and scribal traditions.

I take some comfort in other scholars’ work on the para-/meta-textual analysis of texts, which offers up at least some evidence of the process and the actors:

“… with this approach we can start seeing the agent behind these bureaucratic devices, the scribes who in such minute ways negotiated their presence and transmitted knowledge.” –C. Tsouparopoulou

Productivity: A List

Photo: my standing desk setup

I started this blog post with the supposition that our relentless search for productivity was still in full swing.  It seems there is no end to the blog posts, magazine articles, and general advice about how to focus and “get things done.”  I know that I’ve certainly consumed a large percentage of what’s been written about staying productive and on-task.  I put a couple of terms into Google to look for the overall popularity of searches and was surprised to find that things were not as I had assumed:

This Google Trends graph shows the slow decline in searches for productivity and the classic book Getting Things Done.  As an aside, it’s interesting to note that there are generally two peaks each year: one that rises with the new year and generally peaks around March, and a second that rises at the end of the summer and peaks in October.  Are we in a post-productive state?  Or has everyone found what they’re looking for, or rather the things that actually work for them?  One of the classic overall tips for productivity is not to blindly follow a top-ten list, but to find the techniques that work for you and your particular style.  To that end, I thought I’d list three things that seem to have improved my productivity:

  • The Pomodoro Technique: This is a well-known time-management technique where you work a certain number of minutes on, followed by a shorter number off, in cycles, with a longer break after a certain number of cycles.  There is a wide range of apps, programs, and tools available to help guide you while working.  I use a small plugin for Gnome Shell which sits at the top of the screen and counts down, alerting me when the time for a certain segment has finished.  I use this quite a bit when I’m having trouble concentrating on a particular task.  Interestingly, I often don’t even bother to take the breaks (even though it encourages them); just the act of starting the timer is enough to keep me focused for a couple of hours as long as it’s running in the background.
  • Standing Desk (picture above): I recently purchased an Upstanding Desk and installed it on my table in the office.  What I like about this particular desk is its modularity.  The ability to move the various shelves around lets me change my work habits over time if something seems better for my posture (I recently moved the top shelf higher to allow the laptop and external screen to sit together better).  I often read articles, books, even sometimes websites sitting on a couch, but stand when I have to compose e-mails or write.  Moving between the two is also a nice way to stretch and interrupt the workflow.
  • RememberTheMilk: Most people have some sort of to-do list system, either a piece of paper or a dedicated program.  I struggled for a while to find one that worked well for me.  I read all the comparisons and tested out various options, finally settling on one of the oldest sites out there.  Initially, I was a bit disappointed that they still had not added sub-tasks to their system (something I find very useful for larger projects).  I e-mailed to ask, and was invited to try the beta version, which I’m very happy with.  I’m not sure what can be shared publicly about how it works; suffice to say it’s a huge improvement.  My only concern is that they work quickly and efficiently to make sure that the external tools, apps, etc. work with the new version as soon as possible.  I find that I often use RTM for tasks that need to get done that day, as a way of reminding me of their existence.  On occasion I’ll plot out larger tasks with sub-tasks and due dates to schedule future work as well.

These three things are what has arisen from too much time spent reading about how to improve productivity.  It’s kind of fitting that they all function in different ways and on different platforms.  The Pomodoro Technique is independent of a computer and works to manage the time spent doing things.  The standing desk serves as a locale for getting work done.  And finally RTM organizes and reminds me of tasks which need to happen.  There is no real duplication between the three, and they all serve their purposes efficiently, which I suppose is the end goal for any system of productivity.  That’s the take-away from all of this: you need to find the productivity tips, techniques, and tools which function best together in the way in which you work best.

Chatham Harbor and the future of phone photography

Chatham Harbor Panorama

This is a simple panorama shot from the deck of the loading dock at Chatham Harbor out on Cape Cod.  I’m using this image to illustrate the ease with which the Nexus 5 takes more complicated shots, beyond the traditional simple rectangular photo.  In some ways these are simple party tricks that make the phone seem interesting, but they have a greater significance.  I’ve now started shooting all of my photos on the phone with the HDR mode turned on.  I find that the effect is not at all overdone, and in fact it often gives low-light photos, or photos lacking contrast, more definition.

Similarly, I’ve really come to like the Photo Sphere mode on the phone.  It is a bit gimmicky, in that panning through a massive moving picture with a low field of view is no way to appreciate a vista.  However, the real selling feature in my mind is the way the phone can be made, via the accelerometer, to move within the panorama.  I’ve taken a few photo spheres of places in Turkey, Venice, and also at Chatham.  I can give someone the phone and they can stand, and “be”, in the spot where I took the photo.

Both of these newer forms of photography benefit greatly from the relatively mature operating systems present on these phones.  The camera software is able to capture the raw data and then process it into various forms of panorama, or boost the color and/or contrast by increasing the dynamic range.  The next step is how to share these, and not surprisingly social media is well integrated into these platforms as well.  Despite the advances in social media platforms, we’re still stuck in a method of showing photos slide by slide, with captions and comments.  I wonder if this type of presentation is outdated.  In order to display photo spheres, or even wide panoramas, properly we need a new form of presentation, one that takes into account the varying dimensions of the media.  In some ways Google+’s auto-stories is an attempt to solve this problem, but it’s platform specific.

Gingko

I wrote the previous post using the webapp Gingko.  It’s a neat web-based composition tool.  You start with three columns, and you can add, write, and move anything between the three.  Items are dependent on the column to their left, so it essentially forms a three-tiered tree structure.  It’s really easy to sketch out a very general overview in the left column, add important points to consider in the middle column, and then write out the actual text in the right column.  Since the cells are dependent on the parent cell to their left, it becomes really easy to compartmentalize what you’re writing and also to move things around.  It uses Markdown for formatting, which is fine, and exports as plain text, HTML, and presentation.  It’s got a few more features as well, but just the ability to organize and write all in one pane of the web browser is useful enough for me.

Zenbook Prime UX31A

I recently had to replace my laptop somewhat unexpectedly.  I wasn’t entirely happy with the previous machine, it certainly had its problems, but for various reasons I wanted to wait a bit longer to buy a new computer.  My previous laptop was a Sony VPCSA, which I originally bought because it seemed pretty powerful, had good screen resolution, and was relatively slim.  It had a quad-core processor and 4 GB of RAM, which seemed like it would be plenty for my needs.  Before this my work computer was an old Dell netbook, which was not powerful by any stretch of the imagination.

However, with the Sony, I quickly realized that the battery life was not up to par.  I would get one and a half to two hours max out of the machine.  This, I soon realized, was caused by a larger problem.  The fan on this machine is positioned right in the center of the back edge of the base.  The hinge for the laptop lid is right above it, and when the lid is open the hinge partially obstructs the fan outlet.  I hadn’t noticed this at first, but it soon became very apparent that this computer had major issues with heat.  I did some research online and found many other owners who complained about fan noise and heat.  One owner posted a small bit of advice that they found in the manual for the machine, explaining that the laptop should not be used for extended periods of time with the lid open!  Near the end of this laptop’s life it was having a lot of trouble even doing simple tasks, with lots of pausing and lag on input.  I suspect the constant heat and inefficient fans were slowly melting components and decreasing its overall speed.

So in looking for a new laptop I had some simple criteria: larger screen resolution and long battery life.  That’s pretty much it; most laptops today are powerful enough for word processing, and there are plenty of small, light laptops.  Looking through the options available, however, shows far too many computers stuck at 1366×768.  There seemed to be a rush to that small resolution a few years ago, probably pushed by manufacturers marketing laptops as wide-screen movie-watching devices rather than work machines.  I was also keenly aware of the problem with the previous laptop, so I was looking for a company with a bit better track record for engineering and design.

I ended up getting an ASUS Zenbook Prime UX31A.  The screen resolution is 1920×1080, which is as large as my desktop’s primary monitor.  The battery seems to get at least 4-5 hours on a charge, if not more.  And it’s incredibly light.  I found the choice particularly difficult because nothing seemed to match my requirements perfectly.  The incoming Haswell chips are going to greatly increase battery life, but the computers currently available with them are too expensive.  There is a glut of computers with the aforementioned lousy resolution.  The Chromebooks look good, but only the Pixel has decent resolution, and its battery life is sub-par.  Basically, there were many choices and none of them were right.  I’m fully expecting a more perfect laptop to come out in the next couple of months.

The first thing I did when I got the new laptop was stick Ubuntu on it.  I flashed a bootable USB with the Ubuntu image and booted into the Ubuntu installer before Windows 8 even had a chance to touch the silicon on the motherboard.  It’s a pain paying the inherent Windows tax, but I think that’ll change in the near future.  One issue I was somewhat worried about was the UEFI BIOS and Secure Boot.  For this computer I had to disable Secure Boot and Fast Boot, enable “Load CSM”, and then boot the USB drive with the UEFI option.  I was a bit worried about wiping and installing and then not being able to boot, but by following the above steps it worked just fine.  The good news is that everything just works on the UX31A; previous how-tos and forum posts about this laptop and Ubuntu list a number of workarounds needed to make everything work, but these seem to have been included in the latest release.

However, I am having issues with Gnome 3.10.  I’ve always liked Gnome, and when Gnome Shell came onto the scene with Gnome 3.0, I embraced the change.  I’ve actually become very familiar with it and find it very intuitive and efficient.  I’m not sure if it’s just Ubuntu packaging Gnome 3.10 in a haphazard way, or the most recent release itself, but there seem to be a number of problems with it.  On my desktop, the secondary monitor resets its screen rotation on reboot.  The extension TopIcons, if enabled, de-activates all of your other extensions on reboot.  On my laptop, dragging folders or files makes them invisible.  There is no longer a graphical setting to control the keyboard layout, specifically the compose key.  These are all somewhat minor problems, but they show a lack of polish that is worrying.  I know Gnome is trying to simplify and unify the desktop experience, but I feel like they’re leaving things behind as they do.

Editing Zotero Styles

I’ve been trying to use Zotero for a while now to manage my (growing) bibliography.  I first gave Mendeley a try; I think at the time I was attracted by their standalone client (Zotero was still dependent on Firefox running at the same time).  However, in Mendeley you couldn’t insert page numbers into a reference, which seemed crazy at the time, so back to Zotero I went.  I think Mendeley has since added the feature, but I’ve stuck with Zotero (and was quite happy to see the development of a standalone client soon after).  I find Zotero great when writing papers and articles where the established conventions of citation are rigorously codified in a number of styles.  The recent work I’ve been doing on my syllabus presents a bit of a problem though.  I wrote a draft with all my assigned readings as normal Chicago style (author-date) citations surrounded by parentheses.  These looked kind of bad, but I didn’t want to go through the trouble of deleting all the parentheses, especially if every refresh of the bibliography (an amazing feature of any bibliography manager) would reset all the citations.

Instead I decided to copy the Chicago style I was using and edit it to take out the parentheses.  This of course was a bit of a rabbit hole, and I could see it coming.  But I started with the Zotero wiki page on editing styles, did a quick find for parentheses, saw where they were being inserted for citations, and removed them (but kept the parentheses for dates, issue numbers, etc.).  Then I foolishly just tried importing my new style file into Zotero, which resulted in an error.  So I visited the wiki page on validation, which actually belongs not to Zotero but to the Citation Style Language project, an open-source attempt to create a shared XML citation style format.  This page led me to two validators, only one of which worked; after a few rounds of validating and fixing errors I was good to go.  I re-imported it, changed the syllabus file to use that style, reloaded, and after much crunching and automated moving about, the file re-emerged with all the citations missing their parentheses.  I did a few checks to see if the citations still updated if I changed something in my database, and all was well.

This was an interesting exercise in guided diversion.  I had a problem, managed to fix it after a bit of research and work, and now I’ll have an added tool going forward.  Of course having solved this problem I should return to my syllabus instead of writing this blog post…