Curried lambda
Geoff Wozniak

My personal Lisp rejuvenation (2016-05-16)

I was lucky enough to attend ELS this year since I was able to make it coincide with a vacation. Overall, it was a wonderful experience. The talks were interesting, the conversation was invigorating, and the organization was seamless. Kudos to the organizing committee for running such a great ship.

In terms of work with Lisp, I haven't done much in the past eight years since I defended my thesis. I did a lot of Lisp hacking back then, but work and other things have prevented me from doing anything with it since. I attended ILC 2014 in the hopes of getting involved again, but sadly, that didn't happen. Things are different now and, after attending this year's ELS, I'm looking forward to contributing in some way.


In general, I was quite happy with the talks. I didn't read any of the proceedings ahead of time and aside from a couple of talks, that didn't hinder my ability to follow what was going on. Most of the presentations were enjoyable and there were only a few moments where my mind wandered into "please end this person's presentation misery" territory.

One talk that stuck with me, in a good way, was Stefan Karpinski's keynote on Julia. It addresses something that has always been a pebble in my programming language shoe: numbers. Do they have to be entirely intrinsic? No. I've seen this before, but it was great to see how Julia attacks it. I really like its approach.
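Julia's take, roughly, is that numeric types can be defined in the language itself and participate in arithmetic through dispatch on their types. A toy sketch of the same idea in Python (not Julia, and with a made-up `Mod` class): a user-defined modular-arithmetic number that behaves like a number without being intrinsic.

```python
# A user-defined "number" that participates in ordinary arithmetic
# via operator dispatch. Mod is a name invented for this sketch.
class Mod:
    def __init__(self, value, modulus):
        self.value = value % modulus
        self.modulus = modulus

    def _coerce(self, other):
        # Accept plain ints or another Mod with the same modulus.
        if isinstance(other, Mod):
            assert other.modulus == self.modulus
            return other.value
        return other

    def __add__(self, other):
        return Mod(self.value + self._coerce(other), self.modulus)

    def __mul__(self, other):
        return Mod(self.value * self._coerce(other), self.modulus)

    def __repr__(self):
        return f"Mod({self.value}, {self.modulus})"

a = Mod(5, 7)
b = a + 4           # (5 + 4) mod 7 = 2
c = b * Mod(3, 7)   # (2 * 3) mod 7 = 6
```

Julia goes much further (promotion rules, specialization to machine code), but the kernel of the idea is the same: arithmetic is dispatch, not a privilege of built-in types.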

The talk did reaffirm my love/hate relationship with live coding during a presentation. Although mostly canned, there were still those debugging moments and other small disorienting things. It came dangerously close to being a window into someone's coding session instead of a demonstration.

Another idea that connected with me was James Anderson's somewhat confrontational idea of inferring system descriptions (pdf). I've always appreciated attacks on sacred cows and he was unafraid to ask why we are writing system descriptions at all. Build systems and dependency management have always been a fascinating bugbear for me and I'd love to try a different approach. James' idea of making the Lisp system spit out the dependencies seems like something worth trying.
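For a rough analogue outside Lisp: Python's standard `modulefinder` can recover a script's import dependencies from the implementation itself rather than from a hand-maintained description. This is only an illustrative sketch of the general idea, not what Anderson proposed.

```python
import os
import tempfile
from modulefinder import ModuleFinder

# A throwaway script whose dependencies we want inferred.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("import json\nimport math\n")
    script = f.name

# Ask the implementation which modules the script actually pulls in,
# instead of writing the dependency list by hand.
finder = ModuleFinder()
finder.run_script(script)
deps = sorted(finder.modules)   # names of modules the script loads
os.unlink(script)
```

The output includes transitive dependencies too, which is exactly the information a hand-written system description tends to get wrong over time.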

The standout talk for me was Christian Schafmeister's talk about CANDO, not only due to the technical content, but the infectious enthusiasm of the presentation itself. CANDO is a system for creating and working with molecules using Clasp, which is a version of Common Lisp on LLVM – the thing I'm working on these days – that makes interoperating with C++ seamless. Dr. Schafmeister has been working for a long time on this and it shows. This is a project I'd like to work on.


Conferences are all about the chatter and, for not knowing anyone, I'm happy to say that everyone I interacted with was friendly and inviting. I was a bit nervous about just walking up to a table and joining a conversation, but no one seemed to mind, and everyone was happy to address "the new guy" without making me feel alienated. The social atmosphere is what really won me over.

Again, many praises to the conference organizers for getting good catering and creating lots of opportunity for conversation. Special mention goes to Jon Atack for organizing a social dinner on Monday night. This was my only chance for an evening social event since I ended up missing the conference banquet.

One theme that I noticed, and quite liked, was an apparent willingness to write programs that would interoperate with outside entities. I got less of a sense of that in the past, although it may well have existed. It seemed like people were more willing to use Lisp as part of a heterogeneous deployment rather than the be-all and end-all. Maybe reimplementing some things isn't worth it.

Nitpicks and grumblings

The hacker ethic is alive and well in the Lisp community, but in some cases it was hopelessly childish. At this point, chiding other languages about closures has run its course. There's no need to bring this point up except in cases where communities in other languages insist on taking credit for their existence. I suspect the tolerance of the smug Lisp weenie has worn thin and it probably shouldn't be pushed any further.

Packaging and dependency management are still done poorly and no one can agree on what works. I was in on a few conversations about it and I can't say there was anything close to a consensus. ASDF is important but isn't enough. Quicklisp is great but not sustainable. I'm sure dependency management will be a pain when I start to develop again, but then, I'm not particularly happy with other languages' approaches either.

There was a lot of talk about better documentation. I heard "we're getting to it" a bit too much to trust that anything will come of it, however. And as much as I sympathize with the UltraSpec, I don't think official documentation should be in a wiki (although a searchable HyperSpec would be nice).

As much as I hate to admit it, specialization and optimization are important elements of any system or library. It has to perform well in many circumstances. It can't just be possible to do something; the system has to offer a reasonable path to such performance, with documentation or clear coding idioms. I saw a lot of "look what is possible" and not "look how to do it". There was too much focus on extensibility that likely no one would use. Extensibility should not be a goal in and of itself; it's rarely a selling point.

On a purely personal note, I'm pretty sure I caught Robert Strandh's cold while I was there. I won't hold that against him, though. ;)

Next steps

I'm going to have a go at doing something to help Clasp. I didn't know about it until the conference and was thinking about CL and LLVM on the flight over, so it seems very apropos.

My connectivity during my vacation was pretty terrible (not the conference or the venue's fault) so I didn't actually get to do anything until I got home – which means back to work tomorrow. Still, I'm excited about Lisp development again and after a few years of work under my belt, I'm sure my views will be a little different this time around, likely for the better.

Battlegames of the past (2016-02-05)

Found this in my collection of books. Almost forgot about it. This great repository reminded me of it.

Why I quit OS X (2015-01-03)

After nearly 10 years of using OS X as my primary OS for personal work, I switched away in late 2014. I consider it to be the best tech decision I made last year.

I started using OS X in 2005 when 10.4 (Tiger) was released. I ditched Linux at the time because I needed to print things and Linux was awful at it; OS X wasn't. I was very productive with OS X and had no serious complaints. When 10.6 (Snow Leopard) came out, I was content.

The pangs of dislike started to show up in 10.7 (Lion). The iOS-like GUI and "features" such as Launchpad didn't resonate with me. As things progressed, I became increasingly annoyed with the environment.

By the time I upgraded to 10.10 (Yosemite), my Macbook Pro no longer felt like a personal computer. Each upgrade was spent fighting the newest bells and whistles in order to keep my environment comfortable and familiar. I spent a lot of time going through the System Preferences, figuring out what I had to turn off in order to get my sanity back.

Furthermore, I found that I had stopped using the majority of the primary apps that ship with OS X: Mail, Safari, iTunes, and Apple Creativity Apps/iLife. I essentially ran three apps: Firefox, MailMate, and iTerm2. Most of my work was done in terminals. The culture of the operating system at this point was more about sharing than personal productivity.

In short, I was working against the grain of the environment. It was a gradual transition, but OS X went from a useful tool set to get my work done to an obnoxious ecosystem of which I no longer wanted to be a part.

More damning than the lack of personal connection, though, was the complete lack of transparency and general decline in software quality, as I perceived it.

I basically got no useful information prior to system upgrades. Descriptions like "bug fixes" or "security fixes" say nothing and the links provided weren't much more illuminating. Furthermore, I had no idea as to the size of the download, so I couldn't set any reasonable expectations as to the time I was going to spend waiting.

The 10.10 upgrade was egregious. The fact that the upgrade could take multiple hours due to an incredibly slow directory merge is, simply put, the work of amateurs. Knowing about it ahead of time saved me a lot of frustration, but that kind of thing shouldn't ship. And if it does, at least don't get my hopes up by saying "1 minute remaining" for the better part of an hour.

Messages in 10.10 is a complete shitshow. It's a stunning regression. I gave up on it shortly after Yosemite was installed. The content was frequently out-of-order, mislabeled as new, and the conversation usually unparsable.

There are lots of other little things that irk me: mds being a hog, distnoted being a hog, lack of virtualization, other system services mysteriously firing up, bogging the system down. It doesn't help that the Macbook Pro I have is one of those lemons that overheats easily, thus kicking the fans into "rocket taking off" mode. At this point, my default position on Apple software in OS X has moved from "probably good" to "probably not OK". They seem more interested in pumping out quantity by way of more upgrades. It's death by a thousand cuts, but it's death nonetheless.

After reflecting on all this, I came to the realization that I was frustrated and disappointed by OS X, and I didn't see it getting better. I simply wasn't enjoying myself.

So I quit.

Once I quit, I was met with different frustrations, but they didn't feel like hopeless frustrations.  I've gone back to a desktop system running Linux (for now) and while I consider it markedly inferior to OS X in terms of usability, it feels like a personal computer again.  I'm enjoying the experience and I look forward to working with it, even when it's a monumental pain in the ass.

Maybe I just needed a change of scenery, but I do know that I no longer felt welcome in the OS X world, which is ultimately why I had to leave.

What ORMs have taught me: just learn SQL (2014-08-03)

I've come to the conclusion that, for me, ORMs are more detriment than benefit. In short, they can be used to nicely augment working with SQL in a program, but they should not replace it.

Some background: For the past 30 months I've been working with code that has to interface with Postgres and to some extent, SQLite. Most of that has been with SQLAlchemy (which I quite like) and Hibernate (which I don't). I've worked with existing code and data models, as well as designing my own. Most of the data is event-based storage ("timelines") with a heavy emphasis on creating reports.

Much has been written about the Object/Relational Impedance Mismatch. It's hard to appreciate it until you live it. Neward, in his well-known essay, lays out many cogent reasons why ORMs turn into quagmires. In my experience, I've had to deal directly with a fair number of them: entity identity issues, the dual-schema problem, the data retrieval mechanism concern, and the partial-object problem. I want to talk briefly about my experiences with these issues and add one of my own.

Partial objects, attribute creep, and foreign keys

Perhaps the most subversive issue I've had with ORMs is "attribute creep" or "wide tables", that is, tables that just keep accruing attributes. As much as I'd like to avoid it, sometimes it becomes necessary (although things like Postgres' hstore can help). For example, a client may be providing you with lots of data that they want attached to reports based on various business logic. Furthermore, you don't have much insight into this data; you're just schlepping it around.

This in and of itself isn't a terrible thing in a database. It becomes a real pain point with an ORM. Specifically, the problem starts to show up in any query that uses the entity directly to create the query. You may have a Hibernate query like so early on in the project.

session.createCriteria(Foo.class).add(Restrictions.eq("x", value)).list()

This may be fine when Foo has five attributes, but becomes a data fire hose when it has a hundred. This is the equivalent of using SELECT *, which is usually saying more than what is intended. ORMs, however, encourage this use and often make writing precise projections as tedious as they are in SQL. (I have optimized such queries by adding the appropriate projection and reduced the run time from minutes to seconds; all the time was spent translating the database row into a Java object.)
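To make the projection point concrete, here is a small sketch with Python's built-in sqlite3 (table and column names invented for illustration): the entity-style query drags back every column, while an explicit projection fetches only what the caller needs.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY, x TEXT, "
             "big_blob_a TEXT, big_blob_b TEXT)")
conn.execute("INSERT INTO foo (x, big_blob_a, big_blob_b) "
             "VALUES ('target', 'lots', 'of data')")

# Entity-style query: pulls every column, like SELECT * via an ORM.
wide = conn.execute(
    "SELECT * FROM foo WHERE x = ?", ("target",)).fetchone()

# Explicit projection: only the columns the caller actually needs.
narrow = conn.execute(
    "SELECT id, x FROM foo WHERE x = ?", ("target",)).fetchone()
```

With five columns the difference is academic; with a hundred columns and a million rows, the wide version is where the minutes-to-seconds optimizations come from.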

Which leads to another bad experience: the pernicious use of foreign keys. In the ORMs I've used, links between classes are represented in the data model as foreign keys which, if not configured carefully, result in a large number of joins when retrieving the object. (A recent count of one such table in my work resulted in over 600 attributes and 14 joins to access a single object, using the preferred query methodology.)

What attribute creep and excessive use of foreign keys show me is that in order to use ORMs effectively, you still need to know SQL. My contention is that if you need to know SQL anyway, just use SQL: it spares you from also having to know how the non-SQL gets translated to SQL.

Data retrieval

Knowing how to write SQL becomes even more important when you attempt to actually write queries using an ORM. This is especially important when efficiency is a concern.

From what I've seen, unless you have a really simple data model (that is, you never do joins), you will be bending over backwards to figure out how to get an ORM to generate SQL that runs efficiently. Most of the time, it's more obfuscated than actual SQL.

And if you elect to keep the query simple, you end up doing a lot of work in the code that could be done faster in the database. Window functions are a relatively advanced SQL feature that is painful to write with ORMs. Leaving them out of the query likely means transferring a lot of extra data from the database to your application.

In these cases, I've elected to write queries using a templating system and describe the tables using the ORM. I get the convenience of an application level description of the table with direct use of SQL. It's a lot less trouble than anything else I've used so far.

Dual schema dangers

This one seems to be one of those unavoidable redundancies.  If you try to get rid of it, you only make more problems or add excessive complexity.

The problem is that you end up having a data definition in two places: the database and your application.  If you keep the definition entirely in the application, you end up having to write the SQL Data Definition Language (DDL) with the ORM code, which is the same complication as writing advanced queries in the ORM.  If you keep it in the database, you will probably want a representation in the application for convenience and to prevent too much "string typing".

I much prefer to keep the data definition in the database and read it into the application. It doesn't solve the problem, but it makes it more manageable. I've found that reflection techniques to get the data definition are not worth it, so I succumb to managing the redundancy of data definitions in two places.
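For what it's worth, a minimal version of "read the definition from the database" looks like this in sqlite3 (the table is invented for illustration); it's the full ORM-grade reflection, with types mapped back to application classes, that tends not to be worth it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE report (id INTEGER PRIMARY KEY, "
             "title TEXT NOT NULL, created INTEGER)")

# Read the definition back out of the database instead of repeating
# it in application code. Each PRAGMA row is
# (cid, name, type, notnull, default, pk).
columns = {row[1]: row[2] for row in
           conn.execute("PRAGMA table_info(report)")}
```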

But the damn migration issue is a real kick in the teeth: changing the model is no big deal in the application, but a real pain in the database.  After all, databases are persistent whereas application data is not.  ORMs simply get in the way here because they don't help manage data migration at all.  I work on the principle that the database's data definitions aren't things you should manipulate in the application.  Instead, manipulate the results of queries.  That is, the queries are your API to the database.  So instead of thinking about objects, I think about functions with return types.
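The "queries are your API, functions with return types" idea can be sketched in Python with sqlite3 (the table, columns, and function names here are invented for illustration):

```python
import sqlite3
from typing import NamedTuple

# Each query is a function with a declared return type; the
# application never manipulates table definitions directly.
class OverdueReport(NamedTuple):
    title: str
    days_late: int

def overdue_reports(conn, as_of):
    rows = conn.execute(
        "SELECT title, ? - due FROM report WHERE due < ?",
        (as_of, as_of))
    return [OverdueReport(*row) for row in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE report (title TEXT, due INTEGER)")
conn.executemany("INSERT INTO report VALUES (?, ?)",
                 [("alpha", 10), ("beta", 30)])
result = overdue_reports(conn, 25)   # only 'alpha' is overdue
```

A schema migration then changes the bodies of a handful of query functions; callers only care about the return type.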

Thus, one is forced to ask, should you use an ORM for anything but convenience in making queries?


Identities

Dealing with entity identities is one of those things that you have to keep in mind at all times when working with ORMs, forcing you to write for two systems while only having the expressivity of one.

When you have foreign keys, you refer to related identities with an identifier. In your application, "identifier" takes on various meanings, but usually it's the memory location (a pointer). In the database, it's the state of the object itself. These two things don't really get along because you can really only use database identifiers in the database (the ultimate destination of the data you're working with).

What this results in is having to manipulate the ORM, manually flushing the cache or doing a partial commit, just to get the actual database identifier.
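In raw terms (sqlite3 again, with an illustrative table): the database-side identifier simply does not exist until the row has been written, which is what forces ORMs to flush before they can expose it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY, x TEXT)")

# Application-side identity is just the object's memory address.
obj = {"x": "hello"}
app_id = id(obj)

# Database-side identity only comes into being on insert.
cur = conn.execute("INSERT INTO foo (x) VALUES (?)", (obj["x"],))
db_id = cur.lastrowid
```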

I can't even call this a leaky abstraction, because the word "leak" implies small amounts of the contents escaping relative to the source.


Transactions

Something that Neward alludes to is the need for developers to handle transactions. Transactions are dynamically scoped, which is a powerful but mostly neglected concept in programming languages, due to the confusion dynamic scoping causes if overused. This leads to a lot of boilerplate code with exception handlers and careful consideration of where transaction boundaries should occur. It also makes you pass session objects around to any function/method that might have to communicate with the database.

The concept of a transaction translates poorly to applications due to its reliance on context based on time. As mentioned, dynamic scoping is one way to use this in a program, but it is at odds with lexical scoping, the dominant paradigm. Thus, you must take great care to know the "when" of a transaction when writing code that works with databases, and this can make modularity tricky ("Here's a useful function that will only work in certain contexts").
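One way to pin down the "when" is to give each transaction an explicit extent in the code. A sketch with sqlite3, whose connection objects commit on success and roll back on exception when used as context managers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (msg TEXT)")

# The with-block is the transaction boundary: commit on success,
# rollback on exception, no per-call-site try/except boilerplate.
try:
    with conn:
        conn.execute("INSERT INTO log VALUES ('first')")
        raise RuntimeError("boom")   # forces a rollback
except RuntimeError:
    pass

with conn:
    conn.execute("INSERT INTO log VALUES ('second')")

rows = [r[0] for r in conn.execute("SELECT msg FROM log")]
```

Only the second insert survives. This doesn't eliminate the session-object threading problem, but at least the transaction's extent is visible in the source.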

Where do I see myself going?

At this point, I'm starting to question the wisdom behind the outright rejection of stored procedures.  It sounds heretical, but it may work for my use cases.  (And hey, with the advent of "devops", the divide between the developer and the database administrator is basically non-existent.)

I've found myself thinking about the database as just another data type that has an API: the queries. The queries return values of some type, which are represented as some object in the program. By moving away from thinking of the objects in my application as something to be stored in a database (the raison d'être for ORMs) and instead thinking of the database as a (large and complex) data type, I've found working with a database from an application to be much simpler, and I wonder why I didn't see it earlier.

(It should be made clear that I am not claiming this is how all applications should deal with a database.  All I am saying is that this fits my use case based on the data I am working with.)

Regardless of whether I find that stored procedures aren't actually that evil or whether I keep using templated SQL, I do know one thing: I won't fall into the "ORMs make it easy" trap. They are an acceptable way to represent a data definition, but a poor way to write queries and a bad way to store object state. If you're using an RDBMS, bite the bullet and learn SQL.

An experience report on the social theorem of web services (2014-08-03)

Earlier I theorized that after a web service you find useful becomes "social, and a lot more fun", it becomes something that you might, occasionally, find useful.

In the case of Prismatic, this seems to have been borne out. It went from providing interesting links to pushing clickbait. I can't be bothered to try to train it any more. Maybe I'm a curmudgeon? I may have to adjust my theory to take that into account.

My kingdom for an editor.

A social theorem of web services (2013-12-23)

Let X be a reasonably new web service that you find useful, enjoyable, and would consider paying for. The following statement, with a probability of nearly 1.0, will be issued by that service.

The big change is that X is social now, and a lot more fun.

X is now a service that you might, occasionally, find useful.

My Twitter account was hacked. Thanks, Adobe! (2013-12-19)

This past weekend, I checked my Twitter feed and I noticed I was suddenly following a bunch of accounts that I had never seen before. I immediately sensed something was up, which was confirmed when there were posts from me in Russian (note: I don't speak, nor can I write, Russian).

I immediately changed my password and revoked access to most apps that could access my Twitter feed. Changing my password ended up being a bit of a chore, but it eventually happened. Things have settled down and are back to normal.

What was particularly interesting about this was that right after I changed my password, I got an email from Adobe:

As we announced on October 3, Adobe discovered sophisticated attacks on our network involving the illegal access and removal of a backup database containing Adobe IDs and encrypted passwords. We are writing to let you know that your Adobe ID was in the database taken by the attackers -- but, importantly, your current password was not. As a result, we did not reset your password. We have no reason to believe that your Adobe ID account is at risk or that there has been unauthorized activity on your account. The database taken by the attackers came from a backup system that contained many out-of-date records and was designated to be decommissioned. Adobe’s authentication system of record, which cryptographically hashes and salts customer passwords, was not the source of the database that was taken. (Emphasis mine.)

How sweet of them to tell me this in a timely fashion. It's only been two and a half months.

I heard of the (egregious) Adobe breach, but to my knowledge, I didn't have an Adobe account. Apparently I did, so I changed the password on it anyway, but Adobe's wording has me wondering a few things.

  • Was my account "out-of-date" and about to be decommissioned?
  • Were the "out-of-date" credentials hashed and salted?
  • What constitutes "out-of-date" and what is the decommissioning process?
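As an aside, "cryptographically hashes and salts" means roughly the following. This is an illustrative sketch using Python's standard hashlib, not a claim about Adobe's actual scheme:

```python
import hashlib
import hmac
import os

# Store a per-user random salt plus a slow key derivation of the
# password -- never the password itself.
def hash_password(password, salt=None, iterations=200_000):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                 salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=200_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                    salt, iterations)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse")
ok = verify_password("correct horse", salt, digest)
bad = verify_password("tr0ub4dor", salt, digest)
```

A breach of records stored this way still forces password resets, but it doesn't hand attackers reusable credentials the way the reportedly encrypted (not hashed) backup did.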

I highly suspect that the Adobe breach was the source of the hack here, and it's my fault for recycling passwords. I use a password manager but I probably created that Adobe account before I started using a password manager. My fault for not changing my Twitter password sooner, but Adobe sure didn't help matters.

Lesson learned. I'll slowly go through the hundreds of accounts I have and audit the passwords or delete the accounts where applicable.

And don't trust Adobe.

Science and business (2013-10-18)

Although I work in the software industry, I am an academic at heart, so I keep abreast of what's going on in the world of research. I'm also interested in the cozy relationship that academic institutions and businesses are being encouraged to undertake here in Canada.

A recent article in The Economist describes some of the problems with positive-results-oriented science, and notes:

On data, Christine Laine, the editor of the Annals of Internal Medicine, told the peer-review congress in Chicago that five years ago about 60% of researchers said they would share their raw data if asked; now just 45% do.

I wonder, if I'm a researcher being funded by a company (perhaps being viewed as labour with good tax benefits), would I be willing (or even allowed?) to share data that may be generating revenue for said company?

I have my doubts.

Theory and practice (prelude) (2013-10-17)

I've often thought that if I were to do another degree (or even just take some more courses), it would probably be in economics. I think that's because from what I've read, it's similar to computer science in that there seems to be a vast gulf between theory and practice.

I listened to an interesting lecture by Ha-Joon Chang that made me even more interested: 23 Things They Don't Tell You About Capitalism and what they mean for our economic prospects.

While the title is a little "folksy" for me, the content was worth listening to.  Dr. Chang seems like a pretty reasonable person and the beliefs he claims are commonplace in economic thinking don't fit with what I've seen.  Chang's approach seems like it would make sense as opposed to the dogma I hear about the topic.

It reminds me a lot of software engineering.

Taking notes (2013-10-16)

Eugene Wallingford wrote something that struck a chord with me with respect to taking notes:

Foremost, having no laptop affects my blogging. I can't take notes as quickly, or as voluminously. One of the upsides of this is that it's harder for me to distract myself by writing complete sentences or fact-checking vocabulary and URLs.

In my experience, electronic note-taking is more of a distraction than it is helpful. Taking notes on a laptop (or worse, a phone or tablet) reduces me to focusing on specific phrases instead of thinking about the bigger picture. This is true regardless of medium: attending a lecture, a meeting, or reading a book.

The biggest benefit I get from lectures/meetings/books is connecting the ideas presented with my experiences. I'm working my way through Popper's The Logic of Scientific Discovery and while it's a demanding read, I don't take any notes (I can always go back and re-read it!). It's often the case that I will read five to ten pages, then dwell on it for days on end, relating it to my work in software development, specifically testing practices.

Burying myself in the minutiae of a presentation causes me to get hung up on those details. Getting a larger view is usually more useful. (And it is often the case that you can go back and review the little things anyway. For example, lectures aren't as ephemeral as they used to be.)
