Day 22 - The nth Day Of Christmas

How many presents were given, in total, on the 12th day of Christmas, by the so-called "true love"? How many for the nth day?

For each day, everything given on the previous days is given again, so the gifts stack up into a staircase shape, something like this:

  day  1: 1
  day  2: 1 2
  day  3: 1 2 3
  ...
  day 12: 1 2 3 ... 12

Each column is as tall as the number of days on which that gift is given, and the number of rows is 12.

This means the 1 column is 12 tall, the 2 column 11, and so on.

This is 12 * 1 + 11 * 2 + 10 * 3 ...

That's boring. That's not what computers or maths are for. Let's generalise.

We can see that each section of the summed sequence follows a pattern of x * y, where x + y = 13.

It is common, when analysing sequences, to notice that the order of the terms doesn't matter, and that the row number can be used as a variable. If we call that variable i then each term is (13 - i) * i, and the total is the sum over i from 1 to 12.

  Σ (13 - i) * i

13 is suspiciously close to 12. What happens if we do this?

  Σ (12 + 1 - i) * i

And then replace the 12 with our n to answer "What about the nth day?"

  Σ (n + 1 - i) * i

Does it work? Let's Perl it up. Each value of (n + 1 - i) * i can be produced by a map over the range 1..$n, using $_ in place of i, since that's exactly what it is in this case.

sum0 map { $_ * ($n + 1 - $_) } 1 .. $n

sum0 comes from List::Util, and does a standard sum, except it returns 0 rather than undef when the list is empty - this just avoids pesky warnings.

Try it. Using $ARGV[0] for $n we can give our n on the command line:

perl -MList::Util=sum0 -E'say sum0 map { $_ * ($ARGV[0] + 1 - $_) } 1 .. $ARGV[0]' 12

Vary the 12 to solve for different values of n.

The answer, incidentally, is 364.
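Incidentally, the sum also collapses algebraically: Σ (n + 1 - i) * i over i = 1..n is n(n+1)(n+2)/6, the formula for the tetrahedral numbers, and for n = 12 that's 12 * 13 * 14 / 6 = 364. A quick sanity check of the closed form against the map version:

```perl
use strict;
use warnings;
use List::Util qw(sum0);

# The brute-force sum from above, and the closed form n(n+1)(n+2)/6.
sub presents_sum    { my $n = shift; sum0 map { $_ * ($n + 1 - $_) } 1 .. $n }
sub presents_closed { my $n = shift; $n * ($n + 1) * ($n + 2) / 6 }

# The two agree for every n we care to check.
for my $n (1 .. 50) {
    die "mismatch at n=$n" if presents_sum($n) != presents_closed($n);
}

print presents_closed(12), "\n";  # 364
```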

Day 18: The URI

I've talked a lot about this resource-first way of dealing with the web, and really the internet in general, but it isn't a tool that fits all things. For instance, today I was looking at the point-of-sale module in Odoo, which is essentially an HTML representation of the index resource of the products in the system, but is actually more complicated than that, because it includes that resource, a numeric input box, the bill of items so far, a search box, and a few other twiddly bits to improve the cashier's use of the system. Plus, it is designed with tablets in mind.

This is quite different from the list of products you get when you look for the list of products in Odoo itself.

However, we must construct a URI that refers to this view of the data if we're to be able to access that view of the data in the first place. That means that we somehow have to shoehorn this not-a-resource idea into the everything-is-a-resource idea.

Today I'm going to deconstruct the URI and explain how each part can be used, in order to avoid too much in the way of special behaviour. Ideally we'd like every resource to be represented by a single URI, but that's clearly not going to work.

Allow me to state up front that I consider Odoo's URI scheme to be utterly shocking. But it appears to be a legacy from back in the old days when more people made web things than really understood what URIs were for.


The URI is made up of several parts. Here is what I consider to be the simplest URL that contains all common parts1:

  http://www.example.com:8080/resource/identifier?query=string#part-of-document
  1      2   3       4   5    6        7          8            9
  • 1. Schema
  • 2. Subdomain
  • 3. SLD2
  • 4. TLD
  • 5. Port
  • 6. Resource (type) name
  • 7. Resource (instance) identifier
  • 8. Query string
  • 9. URI fragment

Together, 2, 3 and 4 comprise the hostname; 6 and 7 are the path.
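As a rough sketch of that breakdown (a real program would be better served by the URI module from CPAN, and the pattern below ignores the optional user:pass@ part mentioned in the footnotes):

```perl
use strict;
use warnings;

my $url = 'http://www.example.com:8080/resource/identifier?query=string#part-of-document';

# Captures: schema, hostname (subdomain + SLD + TLD), optional port,
# path (resource name + identifier), optional query string, optional fragment.
my ($schema, $host, $port, $path, $query, $fragment) =
    $url =~ m{^(\w+)://([^:/?#]+)(?::(\d+))?([^?#]*)(?:\?([^#]*))?(?:#(.*))?$};

print join("\n", $schema, $host, $port, $path, $query, $fragment), "\n";
```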

Breaking down the URI


Schema

The schema is the first place where you restrict yourself. Often referred to as the protocol, the schema usually determines how the URI should be used. In our example the schema is http, which tells the client to use the HTTP protocol to make the request.

This is very useful because it means we can immediately assume a large quantity of knowledge about the system that we wouldn't have without the schema. Particularly useful is that we know what sort of programs can be used to actually access this URL3. This is, if you think about it, what the word protocol means: it is those things that are assumed to be the case, given a certain situation. When we all follow protocol, we don't need to explain why we're doing what we're doing.

Mostly we come across URLs specifying the HTTP schema; in fact, it's assumed, in many cases, that a URI with no schema is an HTTP URL, because if you click on it, it opens up in your browser. However, some places have started using their own schemata, such as the spotify: schema, which opens URLs in the Spotify client, or the steam: schema, which opens things with Steam.

It's worth noting that the entire hostname can also be omitted from a URI, but this usually means you get three slashes, not two. This is commonly seen with the file protocol, as in file:///home/user/documents/example.html, where the third / is actually part of the path. For this reason it can be observed that the steam: schema does not quite follow the normal URI standards, since the part immediately following the schema is an action - arguably a resource - and not a hostname.

By inventing our own schemata like this we can create entire applications with a new way of communicating, but we're focusing on the web here, which means we're going to use HTTP(S), like it or lump it.


Subdomain

The term "subdomain" is a bit of a colloquialism. Each section of the hostname is a subdomain of the part to its right. The hostname is a hierarchy with, in this case, com at the top. We usually call this part the "subdomain" because it's the first subdivision that is really relevant to a human.

When we have a subdivided subdomain we sort of stop talking about them and start mumbling and saying "that bit" and pointing.

The subdomain is a tool we can use to do many things. Traditionally the web is in the www subdomain, but the http protocol is usually sufficient to assume web, these days. However, that's starting to change, as we start to send non-web things over HTTP. These non-web things are, e.g., the API, or the CDN.

Really consider using an api subdomain for your API. You'll find that if you have an api and a www, then your website can, for the most part, have the exact same URI structure as the API. This is more often possible than it appears to be, because people don't tend to think of their web pages as representing a resource in HTML format.


SLD and TLD

The SLD is the part of the domain that really, to a human, represents where the site is. This is usually your company or organisation name, or some other thing whose entire purpose is to say what this whole web site is about.

You can install a system under multiple domains and thus they would all have the exact same URI scheme, except that, because they're in different places, the records that you get would be different.

Because the owner of one domain is not the same person as the owner of another, except by coincidence.

I've lumped the TLD in here too, because the TLD is, to most people, part of your domain name - which is why we call the subdomain the subdomain regardless of where it appears on the actual hostname.


Port

When designing URI schemes it's helpful to drink a lot of port, for inspiration.

Commonly there are alternative services associated with your website, meaning they're on the same domain, and you can't use the subdomain because these other services need api and www subdomains of their own.

One trick is to mount these services under a part of the path, and consider them a big resource with sub-resources; but easier is to install them on a different port.

For example, your Elasticsearch instance - which communicates entirely via HTTP - can be running on the same hostname as your website, but a different port. Elasticsearch's default port is 9200, with further instances on the same machine taking successive ports up towards 9300.

Resource name

The first part of the path of the URL I'm calling the resource name. That's because this is where the actual resource you're requesting starts. Everything before the path is defining whose resource you are asking for, but once the path starts you're starting to get a handle on the actual information.

The resource name, when requested, can have multiple behaviours, depending on the purpose of the resource, but common is simply to be an index of all the items of that type. Since that can be cumbersome, it is perfectly legitimate to both paginate this list and summarise the entries. That sort of stuff is well out of scope of this article, though.

Other uses of the first part of the path are organisational, and may be handled better as a subdomain. For example, having an api part of the path here is not as useful as it would be to have an API subdomain, because if the paths to the resources can be consistent then we don't have to ask questions about what they should be.

Other times, you may want to use a different port. For example, if the web stuff is on port 80 then the administration part could be on port 8080. This also allows you to control access to the different parts of the site at the kernel level, using routing rather than soft authentication.

Doing this also means that it's harder to guess the correct path to the admin area, since you can use an obscure port. Denying access based on IP rules means you'd never confirm to unauthorised users that they'd guessed right in the first place.

But really, there's no exact reason why you would or would not add parts of the path to the URL in order to divide it up into separate logical zones. This can certainly help with human comprehension of the purpose of your URL. Sometimes you may even want to provide dummy paths - paths that refer to the same resource as other paths, but assist with conceptual compartmentalisation by having different subpaths.

In examples like /blog/post/1 and /shop/product/1, the first part of the path could be omitted, provided that post is always the blog post and product is always a shop product. Consider also that you could still use subdomains for these.

The important part would be to ensure that your uses are consistent. Always have each part of the URL refer to the same logical division of your resource structure.

Item ID

Once you've decided at which point of the path to put the resource type, you should probably put the next part as an optional ID field.

The combination of a resource name and an item ID should be entirely sufficient to retrieve all the information about that specific instance of that type.

This is a reasonably central principle to the resource-first model of your system - all your things have a type and an ID and that's all you need to provide to retrieve it, or at least a representation of it. Everything else is your organisational whimsy and the system really shouldn't have to know.

More formally than dismissing it as whimsy, I should point out that even the type names and shapes can change, and that's difficult enough to deal with. Every level of organisation you add on top of this is another changeable shape of the system that at some point you're going to have to adapt. The fewer of those you have, the better.

The actual format of your identifier is up to you, but there's really nothing else you can put after the resource name that is relevant at this point.

Query string

If I catch you using a query string to tell a dynamic resource to load a specific other resource I will murder you in your sleep.

Seriously, this sort of crap - index.php?page=product&id=4 instead of /product/4 - is all over the internet. Yes, it's usually PHP.

You are using a URI - at least put the resource identifier in the resource identifier.

It is important to note that the query string is not the same thing as the "GET parameters". A query string does not have to be in the format key=value&key=value - the web server passes the query string straight to the app, and it is the application that decodes it in its own way. It is common to use the key=value&key=value structure but not required.
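A sketch of that conventional decoding, assuming the key=value&key=value form and ignoring percent-escapes (the URI module on CPAN handles the general case; the parameter names here are made up):

```perl
use strict;
use warnings;

my $uri = 'http://www.example.com/search?q=perl&page=2';

# The web server hands the application the raw query string;
# splitting it into key=value pairs is the application's own convention.
my ($query_string) = $uri =~ /\?([^#]*)/;
my %params = map { split /=/, $_, 2 } split /&/, $query_string;

print "$params{q}, page $params{page}\n";  # perl, page 2
```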

The query string's most obvious purpose is to pass a query to a resource that expects one, or that at least accepts one. Often the index resources will allow for some sort of search or filter functionality, and if that's not the case then special resources designed to search and filter - and possibly concatenate - other resources will accept search parameters.

Further specialisation of resources would not even use the KVP format of "GET parameters", and simply take the query string as instruction. These types of resource are drifting away from the "object" type of resource and moving towards "function" resources, which are a separate discussion.

The thing about the query string is that it is usually only relevant to GET requests, which is why it is sometimes called the GET string. But GET is an HTTP verb and the query string is part of the URL; and URLs don't have to be http://, so the query string can really be used against any scheme.

It is often said the query string should not be used to send data to the server, but I'm really not sure that's the case. The server should not store data as the result of a read request (HTTP's GET), but it is welcome to store data as the result of a write request (HTTP's POST or PUT). In which case, the mechanism by which the data are provided is entirely up to the server.

This is why you should call it the query string, not the GET string.


Fragment

The part of the URL after the # is called the fragment. This is not actually part of the resource identifier, but is provided for the client's benefit.

If you click on any of the footnote marks in this document4, most browsers I give a toss about will jump to the footnote, and back again when you click on the number of that footnote.

No new page request is made. The browser is not being instructed to access a different resource. In the example earlier, the fragment is #part-of-document. The fragment is usually used to refer to a part of the document. In HTML and XML, this is either by the id or name attributes of the elements.

In this document, the a tags that jump around the page have name attributes that the browser uses to scroll to them when the URL fragment changes, i.e. in these blog-post resources, the parts-of-the-document that I refer to with URL fragments are the footnotes and the places the footnotes refer to.

Using the document fragment to refer to specific resources is a crime committed by many "JavaScript apps" today. The reason this is a crime is that it is not identifying the resource; it's identifying the resource proxy, which means the correct client must be used to actually access the resource itself. It's like having a proprietary browser that only understands a completely different URI format.

It's a crime because browsers are more than capable of intercepting URI requests inside an application and getting the application to update as necessary, and servers are more than capable of returning a javascript-app-with-resource-in-it as the HTML representation of the resource.

There is no reason besides lack of imagination to trample all over that URI system just to avoid reloading the page every so often.


Not mentioned is the idea of a "related resource". This can be a third part of the URI path whereby you request an index of a separate resource based on the current one - something like /author/1/posts.

This is, conceptually, the same as /posts?author=1, but you may wish to return the results differently, e.g. with more expanded objects rather than just URLs to the results.

In upcoming posts we'll probably have a look at those "functional" resources I mentioned in passing. This post has been entirely about "object" resources, i.e. those resources that simply represent some representation of a real-world object, or a fake-world object, but ultimately something that can be represented as a JSON object with fields and values. I will also try to discuss the resource-first view of website building using the aforementioned point-of-sale in Odoo as an example.

We also haven't discussed how it is that you would relate resources to one another in knowable ways. This ties in with the hyperlink concept and is the thinking behind Web::HyperMachine - HTML pages are already linked together with <a href="related-link">, but there are myriad other ways even those use hyperlinks to refer to other resources, and even more ways in HTTP itself.

1 I've omitted from this the user:pass@ part that can be used before the hostname, because it's not very common.

2 The "second-level domain" is colloquially the "company" part of the name, i.e. the first part that actually identifies at a human-readable level what it is the URI refers to. In some cases, such as with .co.uk addresses, that slot is taken by another registry level - the SLD (co) under the TLD (uk) - and it is the third-level domain that is the company part. Colloquially, we can refer to co.uk as a TLD, so that the company part remains the SLD.

3 A URL is basically a URI that you can actually use. That is, there exist URIs that refer to resources but that cannot actually be used to access that resource; for example the ISBN URI schema cannot be used to get an actual book.

4 Like this one.


Day 17: A complex and detailed investigation into the various merits and faults of the assorted combinations of codepage, character set and byte encoding of human-readable text.

There are 128 characters in ASCII and tens of thousands of characters in the real world. It is probably an interesting debate, trying to come up with the most efficient way of encoding non-ASCII characters without screwing everything up.

Don't waste your time. Use UTF-8 and Unicode.

"But what about UTF-16?" No.

"But what about--" NO.

ASCII is included in UTF-8 Unicode. So is everything else. Everyone understands it, everything's assuming it, and all the other encodings and charsets are more obscure and therefore harder to deal with.
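That inclusion is literal, and easy to check with Perl's core Encode module:

```perl
use strict;
use warnings;
use Encode qw(encode);

# ASCII text comes out byte-for-byte identical when encoded as UTF-8...
my $ascii = 'plain old text';
die "ASCII changed" unless encode('UTF-8', $ascii) eq $ascii;

# ...while characters outside ASCII simply take more bytes.
my $smiley = encode('UTF-8', "\x{263A}");  # WHITE SMILING FACE
printf "the smiley is %d bytes\n", length $smiley;  # 3
```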

Everyone (except PHP) has UTF-8 Unicode built in to whatever programming language they're using.

Unless you're writing for devices with memory measured in bytes and a network connection measured in baud then you have time and space to use the bloating of UTF-8 Unicode. So suck it up, be inefficient, and accept the VHS of UTF-8 over the Betamax of whatever you're looking all cow-eyed at today.

And, in case you were wondering, ASCII is never the right answer.


Day 16: Web::Machine

Web::Machine is pretty cool because it reorganises the way you think about your website's structure, focusing on the perspective you should really be starting with in the first place.

Web::Machine encourages you to construct several objects, each of which handles a URI by representing the resource to which that URI points.

Remember that URI is a Uniform Resource Identifier. We've had this discussion. The parts of the internet that use URIs are based on the assumption that they are sharing information about resources, and hence the focus is on the resource.

Web::Machine starts with the resource. You construct an object and mount it as Plack middleware to handle the URI to that resource. These objects are actually the machines. You construct a Web::Machine with a subclass of Web::Machine::Resource, and if that's all you want to do, you call ->to_app on it and plack it up.

Each Web::Machine so constructed is a Plack::Component. That means you can bring in a Plack::Builder and mount machines in it.

my $builder = Plack::Builder->new;
$builder->mount(
    '/resource' => Web::Machine->new(
        resource => 'MyApp::Resource',
    )->to_app,
);

Alternatively, you might prefer to use something like Path::Router, providing subs that build Web::Machines based on arguments.

my $router = Path::Router->new;
$router->add_route('/resource/:id' => target => sub {
    my ($req, $id) = @_;
    Web::Machine->new(
        resource      => 'MyApp::Resource',
        resource_args => [
            id => $id,
        ],
    )->call($req->env);
});
Two things are notable about this particular invocation. First, it is necessary to run call on the resulting machine manually. The second is that, now that we have actual args coming in, we're seeing how Web::Machine takes an array ref for these, not a hashref; i.e. it's an argument list and not required to be hash-shaped.

MyApp::Resource is what handles the actual magic: Web::Machine expects certain subroutines to be overridden from the base class Web::Machine::Resource that define what this resource can do.

The sensible ones to provide are content_types_provided and the to_* filters that define how to represent this resource as the various content types it supports.

The documentation lists all of the functions that can be overridden to provide behaviour specific to this class.

RFPR: Web::HyperMachine

I've started taking this a step further. Resources are only part of what makes the interwebs work. The other part is the fact the resources are related to each other: hypermedia.

Up on the githubs is a start to the module Web::HyperMachine, which tries to wrap Web::Machine in an understanding of how the resources relate to one another. By adding a couple of DSL-like functions to the Resource class it is possible to automatically construct the URI schema for the system, using the declared names of resources and relationships within the resource classes themselves.

The user simply mounts those resources and the machine does the rest:

use strict;
use warnings;
use Web::HyperMachine;

my $app = Web::HyperMachine->new;

And the resource would be e.g.:

package MyApp::Resource;
use strict;
use warnings;

use parent 'Web::HyperMachine::Resource';


our @data = qw( hello hi hey howdy );

sub content_types_provided { [{ 'text/html' => 'to_html' }] }

sub fetch {
    my ($self, $id) = @_;
    return $data[$id];
}

sub to_html {
    my $self = shift;
    my $resource = $self->{resource};

    q{<h1>} . $resource . q{ world</h1>};
}

1;


If you plackup that script, you'll find that /resource/01 will return an HTML page with "hello world" in it; and other values will correspondingly index into the array.

Feedback on this concept is encouraged; it's not been worked on for some time, like most things I do, because I got bored of it, because I didn't have an actual use for it.

1 If 0 doesn't appear to work, you may have an outdated version of Path::Router. The issue tracker says it is fixed on CPAN now.


Day 15: Crime and Punishment

In today's post I'm going to try to convince you to think of the interfaces you make in terms of punishment, in order to find the path of least punishment.

Here's a perspective for you to consider: when someone uses your system, they are doing you a favour. Don't try to yes-but-what-if your way out of this; I'm not asserting that it is the case. I am saying that is how you should consider it to be. Assume that the user, given the option, will pick an alternative system. Design the interface from the point of view that it is the very fact people use the system that is the currency that measures its success. If people don't like using it, if you make it hard to do, they simply will stop doing so.

This is an important perspective if you are a business, because your system needs to get the user from state 1, wherein they have their money, to state 2, wherein you have their money. If you make that difficult to do, then they won't do it. You are not doing them a favour; don't treat them like you are.


Punishment probably makes you think of unwanted tasks doled out to people for correction or restitution of some misdemeanour or other. This is a bit of a goal-oriented definition, because it implies a perpetrator in the first place; i.e. it expects that some misdeed has been undertaken for which recompense needs to be made.

People are, of course, falsely accused and given punitive action nevertheless. The focal point of the above definition is that of an unwanted task; some chore that must be gone through, which one is inconvenienced, perhaps embarrassed or humiliated, to do. The concept is one of a strong antipathy or disinclination to do the thing; hence it is considered punitive to require that the person do it.

Crime and Punishment

When you design an interaction between a human and a computer you are establishing a sequence of events that will allow the user to eventually find themselves in a situation whereby the thing they set out to do has been done. Within this highly abstracted scenario there are three players:

  • You (the entity with which the task is being performed)
  • The user (the entity trying to perform the task)
  • The task (the sequence of events by which the thing moves from not-done to done)

This set of three players implies several types of task:

  • Expected but trivial; these things do not inconvenience
  • Expected but undesirable; the user has prepared for this
  • Unexpected but trivial; these things are minor inconveniences
  • Unexpected and undesirable; necessary evils
  • Unexpected and undesirable and avoidable; punishment

When you design an interface and you've added something to that interface, seriously consider whether that thing can be considered punishing the user for something they didn't do wrong.

Especially consider whether it is punishment for something out of their control. In many cases it is necessary to inform the user that there was a problem; this may seem like punishment, because it is quite undesirable to have to go through all that again.

Well, it is. Reduce the impact of problems by not discarding all the information the user has entered. If the problem is on your side, don't force the user to pick up the pieces, because they won't. If the problem is on their side, only require the re-entry of that information - not the entire thing.

And if there isn't a problem, why are you making one?


Amazon punished me recently. They have this 1-Click registered-trademark button that allows you to find something you want and have it on its way to you just by pressing a button. That's a great feature - they are absolutely doing me a favour by having it. And they do me a second favour by letting me amend the order for up to 30 minutes after it's created.

Then they punish me for wanting to do that.

If you try to change the delivery address of such an order you are required to "confirm" your payment details. Why? They told me (on Twitter) that it was a security precaution to prevent others from accessing my personal information.

What utter, rotten bullshit. This is rubbish design, pure and simple. If I didn't change my delivery address, I would not have to confirm anything! This is unexpected, undesirable, and completely avoidable. It is punishment for wanting to have it delivered somewhere else. That is not a punishable offence.


I get very upset sometimes. SimplyBe are absolutely not the sort of company that want me to give them any money. Every single step in between me selecting a product and me paying for the product was a pain in the arse.

Here are the necessary evils of buying something online:

  • Entering your payment details
  • Telling them where to send the product

That is it. Everything else beyond that is you not doing me a favour. Sometimes we accept certain things, like do you want to sign up for the newsletter? (No.) But there are really only two things a place needs to know about you in order to get your money from your pocket and into theirs. If they punish you for trying to do that, go somewhere else.

For the curious, my tirade can also be seen on Twitter, written live as I came across the problems with the checkout. Finding it is left as an exercise to the reader. Every single tweet in that set is about something I consider a punishment, and I consider myself as having been punished for wanting to give them money.

Metro 2033

I first started thinking about interfaces in terms of punishment while playing this game, Metro 2033, of which many readers may have heard. It was touted as one of the best games of whatever year I missed it in when it first came out. It's set in the subway of Moscow - the Metro - where humanity has retreated from whatever disaster has yet to be revealed.

The game goes, by stages, from stealth to survival to legging it to brawling to just wandering around in a township buying stuff. And it punishes you.

Progress in the game is saved by a checkpoint mechanic, although it doesn't tell you where the checkpoints are. All you know is that, if you die, you're going to be set back some arbitrary distance; although once you've failed once, you know where you're going to go back to.

The game is therefore, at the abstract level, a series of challenges that must be overcome in order to progress; failure in a particular challenge sets you back to, at best, the start of that challenge or, at worst, the start of the level. You don't know where until you fail a challenge, but when you've failed a challenge you have some idea of the new worst-case scenario.

The problem is that some challenges are more, well, challenging than others, but failing them causes you to have to repeat the less obnoxious ones in order to retry the difficult one. In a save-when-you-want game you would simply save before you reached the difficult challenge, in order to avoid repeating the easy ones more than once.

This reduces the easy challenges to chores, trivial tasks that you gradually become adept at and simply have to slog through to try the part you keep failing at, until eventually you find the secret to the difficult part. This quickly stops being entertaining.

Games should not be chores. Chores are punishment.

Incidentally, the game (so it calls itself) has another punishment mechanism: traps. Consider the welcome form of punishment, whereby you are set back for failing a challenge - this is the expected function of a game, since a game is supposed to be entertaining by presenting a challenge, and a challenge you can't fail is not a challenge at all. The trap I'm talking about is not a trap for the character in the game, but a trap for the player. In the game, traps are visible and have a disarming mechanism; but traps for the player are unexpected, random events. Unexpected, undesirable, but avoidable by the designer.

Twice, so far, the game has required me to be discreet, quiet, stealthy - this means light off - and then punished me by leaving traps in the dark. Things I cannot have avoided by using skill - points in the game where the only two approaches to the challenge would have caused me to fail. Damned if you do, and damned if you don't. The only way to beat the challenge is to have failed it at that point once already. How do I know there won't be another trap ahead? This challenge has become a chore.


Maintain flow. Most of the things I've listed as examples of punishment are flow-breaking. Most of the time, the user doesn't want to have to know how to perform the task; they need to be prompted to enter information, and as little information as possible. Every step along the way is a step further away from them achieving their goal, and the value of your system is entirely measured in how many people use it to achieve their goals.

Common punishments include:

  • Forcing the user to manually type information they use a computer to automate in the first place (autofill forms, or refusing to let me paste my generated passwords into the confirmation box).
  • Repetition of trivial tasks that shouldn't have to be done at all.
  • Requirement of information you don't strictly need.
  • Considering valid data to be invalid because your validation is broken (or vice versa).
  • Similarly, rejecting sensible input because you're scared of it (like most of my randomly-generated passwords).
  • Pretending to let you do something, and then moving the goalposts and not actually doing it.
  • Not providing sufficient information to help the user rectify the problem.
  • Fragmenting input forms across multiple pages.
  • Cramming a single page with too much input.
  • Discarding information because your fragile system shat itself.
  • Choosing difficult fonts and colours to read.
  • Making the user hunt for the next thing they have to do.
  • Related, leaving the user at the end of a process with no confirmation or failure message, so they don't know that they're done, or feeling that they have to do it all again.

I'm sure if I use the internet for another day I'll be able to double this list but you get the idea. For every action the user has to take, is it something they've prepared for, and do they actually have to do it?


Day 12ish: PERL

PERL is wrong. It was invented at some point to mean Practical Extraction and Report(ing) Language but Perl was never called that originally.

Although I do quite like the interpretation Poor Excuse for a Real Language, which unfortunately doesn't initialise to PHP.

There's also a swathe of awful, ancient code written in Perl.

This legacy dogs Perl's steps, despite the recent rise of Perl, like an X-Wing rising out of the Dagobah swamp.

Thus I propose a naming convention: Anything that can be considered to be dragging Modern Perl down be referred to as PERL code. It's clear how PERL is indeed a pathetic excuse for a real language. Perl resembles PERL as much as Episode IV resembles Episode I.

PERL is dead. Long live Perl.


Day 11: List context and parentheses

It's common to start off believing that () make a list, or create list context. That's because you normally see lists first explained as constructing arrays:

my @array = (1,2,3);

and therefore it looks like the parentheses are part of list context.

They aren't. Context in this statement is determined by the assignment operator. All the parentheses are doing is grouping up those elements, making sure that all the , operators are evaluated before the = is.

There is exactly one place in the whole of Perl where this common misconception is actually true.

LHS of =

On the left of an assignment, parentheses create list context. This is how the Saturn operator works.

$x = () = /regex/g;
#    |___________|

The marked section is an empty list on the left-hand side of an assignment operator: the global match operation is therefore in list context.

LHS of x

This is a strange one. The parentheses do construct a list, but the stuff inside the parentheses does not gain list context.

my @array = (CONSTANT) x $n;

In this case, CONSTANT - presumably sub CONSTANT {...} - is in list context; x gains list context from the =, and CONSTANT inherits it.

my $str = (CONSTANT) x $n;

Here we have x in scalar context because of $str, and CONSTANT in scalar context because of that. This is not really a whole lot of use, however.

Various Contexts

This sub reports whether it's called in scalar, list or void context1:

sub sayctx { say qw(scalar list void)[wantarray // 2] }

Now we can test a few constructs for context:

# void
sayctx;

# scalar
scalar sayctx;

# scalar
my $x = sayctx;

# list
my @x = sayctx;

# list
() = (sayctx) x 1;

# scalar
my $x = (sayctx) x 1;

# list
last for sayctx;

# scalar
while (sayctx) { last }

# scalar
1 if sayctx;

# scalar, void
sayctx if sayctx;

# scalar, scalar
sayctx > sayctx;

1 Understanding it is left as an exercise to the reader.


Day 10: Fixes to DBIx::Class::InflateColumn::Boolean

I'm finding my new position at OpusVL ever more valuable. We like to put extra time into getting to the bottom of an issue because we rely so heavily on open-source software. Problems we discover in the modules we use are worth investigating for their own sake, simply because the amount of time already put into the modules by other people is years; years we didn't have to spend ourselves.

Today I discovered that, if I ran my Catalyst application under perl -d, it didn't actually run at all.

After much involvement from various IRC channels I came to the conclusion that the problem was in Contextual::Return; or rather, the problem was in the 5.14 debugger, since it seems OK in 5.20.

Anyway, Contextual::Return was employed by DBIx::Class::InflateColumn::Boolean, which I was using because SQLite doesn't have ALTER COLUMN. We test components of Catalyst applications as small PSGI applications with SQLite databases backing them, which has its own problems, but in this case the issue was the column in question being declared boolean NOT NULL DEFAULT false, and SQLite not translating "false" as anything other than the string "false", then shoving it into a boolean column anyway.

So DBIC faithfully gave me "false" back when I accessed the row, and "false" is true, so everything broke.

So I inflated the column.

This all resulted in a patch to DBIC::IC::Boolean, authored by haarg, removing the dependency on Contextual::Return entirely.

This may be a case of avoiding rather than fixing the problem, but since the problem appears to exist in the 5.14 debugger, the only way to fix that is to update to 5.20 - or whenever it was that it was fixed.

It also prompted me to rebuild the SQLite database to remove that default. Turns out DBIC doesn't fill in default values when creating rows.


Day 9: Scalar filehandles, or IO, IO, it's not to disk we go

Did you know you can open a variable as a file handle?

This is a great trick that avoids temporary files. You can write to the filehandle, and everything written thereto is available in the other variable. I'm going to call the other variable the "buffer"; this is a common term for a-place-where-data-get-stuffed.

Here's an example whereby I created an XLS spreadsheet entirely in memory and uploaded it using WWW::Mechanize. The template for the spreadsheet came from __DATA__, the special filehandle that reads stuff from the end of the script.

This allowed me to embed a simple CSV in my script, amend it slightly, and then upload it as an XLS, meaning I never had to have a binary XLS file committed to git, nor even written temporarily to disk.

In the example below, a vehicle, identified by its VRM (registration plate) is uploaded in an XLS spreadsheet with information about its sale. The $mech in the example is ready on the form where this file is uploaded.

The main problem this solves is that the VRM to put into the spreadsheet is generated by the script itself, meaning that we can't just have an XLS file waiting around to be uploaded. As noted, it is also preferable not to have to edit an XLS file for any reason, essentially because this can't be done on the command line - LibreOffice is required, or some Perl hijinks.

open my $spreadsheet_fh, ">", \my $spreadsheet_buf;       # [1]
my ($header, $line) = map { chomp; [split /,/] } <DATA>;  # [2]
my $xls = Spreadsheet::WriteExcel->new($spreadsheet_fh);  # [3]
my $sheet = $xls->add_worksheet();

# processing

$line->[0] = $vrm;

$sheet->write_col('A1', [ $header, $line ]);              # [4]
$xls->close;

$mech->submit_form(
    with_fields => {
        file => [
            [ undef, 'whatever',
                Content => $spreadsheet_buf ],            # [5]
            1
        ],
    },
    button => 'submit',
);

# [6]
__DATA__
VRM,Price,Fees,Collection,Valeting,Prep costs

The key to this example is in [1], which looks like a normal open call except for the last expression:

\my $spreadsheet_buf;

This is a valid shortcut to declaring the $spreadsheet_buf and then taking a reference to that:

my $spreadsheet_buf;
open my $spreadsheet_fh, ">", \$spreadsheet_buf;

The clever part is that now, $spreadsheet_fh is a normal filehandle that can be used just like any other; just as if we'd used a filename instead of a scalar reference. At [3] you can see a normal Spreadsheet::WriteExcel constructor, taking a filehandle as the argument, as documented.

At [2] you can see DATA in use, which reads from __DATA__ at [6]. This also acts like a normal filehandle; <DATA> reads linewise, and we have to chomp to remove the newlines.

We map over these lines, chomping them and using split /,/ to turn them into lists of strings; and this list is inside the arrayref constructor [...], meaning we get an arrayref for each line.

At [4] we have processed sufficiently to have installed the VRM in the gap at the front of the second line, i.e. the zeroth element of $line, so write_col is employed to write both arrayrefs as rows (yes I know) into the spreadsheet.

When we call $xls->close, this writes the spreadsheet to the filehandle. But no file is created; instead, the data go to $spreadsheet_buf. If we were to print $spreadsheet_buf to a file now, we would get an XLS we can open.

Instead, at [5], we use the trick documented in submit_form (ether++ for reading everyone's mind) to use the file data we already have as the value of the form field.

This trick is remarkably useful. You can reopen STDOUT to write to your buffer:

    local *STDOUT;

    open STDOUT, ">", \my $buffer;

but that's better written

my ($buffer) = capture { do_stuff_that_prints() };

from Capture::Tiny.

See also

If you use IO::Handle then your $spreadsheet_fh will be an object like any other - but these days, you get that simply by using lexical filehandles anyway.

IO::Scalar seems like a decent OO-style module for dealing with this, and it looks nice too.

IO::String also works with strings-as-IO.

I've not tried either of these latter two, but YMMV etc.


Day 8: Mindset

It doesn't matter what language you start in. The language doesn't help. The problem is you; you're the new developer, the inexperienced green sapling; you're the one with no instinct, no sense of smell, and no idea where to begin. You probably don't even have a problem you want solving.

Whenever we solve a problem we draw on our knowledge and experience to solve it. Knowledge and experience differ like theory and practice do. Knowledge is the theory. You can know something because you were told it, and it stuck. Arguably, the best way to know something is to understand it; then you know why it is the case, and what you really know is more general, more applicable, and hence more useful. Experience is practice; you've done this before. Experience is the sort of knowledge you need in order to produce a good solution to a problem, because experience tells you what the next problem is, and how to avoid it now.

Experience alters your thought process.

In today's example we see a green programmer trying to solve a problem:

Report the powers of two that sum to produce a given integer

That is, break down an integer into the powers of two from which it is composed.

Scroll no further if you wish to solve it yourself. In Perl.

No language can provide you, up front, with the knowledge you need to answer this question. Most languages have for loops and while loops, and something that can raise 2 to a power. But that's all you know. You have a few bits of theory, but no experience to draw upon. So your thought process goes something like this:

  • I can take a number n and find the nth power of two 2 ** $n
  • I can store a value and compare it to my target num $total > $num
  • I can loop an indefinite number of times with while
  • The biggest power of two less than num is definitely part of it

You reach the conclusion, using knowledge, that you can subtract ever-decreasing numbers from your target, in a loop. Any number that leaves you with a positive number simply means you can repeat the process with the new number, having remembered that particular power of two.

use 5.010;
use strict;
use warnings;

my $num = shift;
my $power = 0;

$power++ until 2 ** $power > $num;

while ( $power >= 0 ) {
  if ($num - (2**$power) >= 0) {
    say "$power (" . (2**$power) . ")";
    $num -= 2 ** $power;
  }
  $power--;
}

4 (16)
2 (4)

Reasonable. Now here's my thought process:

  • They want all powers of two that come together to sum a number
  • That's how binary works
  • We can ask the binary representation of num for all the on bits
  • The positions of those on-bits are the answer.

So we write that.

say for grep { $_ } map { 2 ** $i++ * $_ } reverse split //, sprintf "%b", shift

This is a one-liner. Try it as perl -E'...' 20, putting the code in place of the ....


OK we'll break it down, but you'll see that each section maps roughly to each of the items in that list.

"They want all powers of two"

The answer is going to be a list. say for LIST, and we have to construct LIST. The powers of two have a test for validity, so there's probably a grep. say for grep { CONDITION } LIST.

We should really build an array for LIST, and use it at the end.

use 5.010;

my @bits;

say for @bits;

"That's how binary works"

Getting the binary representation of a number is easy; sprintf "%b", EXPR. In the one-liner we used shift to take the first command-line argument. We can put $num here and save the result of sprintf instead of using it directly.

my $num = shift;
my $binary = sprintf "%b", $num;

"We can ask the binary representation for all the on bits"

How? This is a two-parter. First you have to turn the string into bits. Then you have to find the on-bits.

Turning the string into bits is easy - you split it on the gap between characters:

my @bits = split //, $binary;

Less obvious is finding the on-bits. See, we don't want the actual bits themselves; all the on-bits are 1, so finding them all would simply tell us how many there are. We actually want to know where they are.

Trouble is, sprintf gives us 10100 for 20. The first bit is the high bit, but that has the smallest offset, i.e. it's the 0th digit in that string. And the other 1 is the 2th digit. Knowledge tells us that our 20 working example should report 4 and 16; but 2 ** 0 is neither of those, even though 2 ** 2 is.

The answer to this is actually in the original solution: we have to work backwards, biggest number last. That's why we reverse it.

my @bits = reverse split //, $binary;

"The positions of those on-bits are the answer"

In the final solution I report the powers of two, not the numbers we raise two to, and the positions are the numbers to raise two to, not the power of two to that. Clear?

The positions of the on-bits are found using a bit of a naughty map, which uses a counter outside its scope. map should really not have side-effects. We can work around this in a proper script, however.

By iterating through the bits and incrementing a counter as we go, we can determine the value that this bit represents.

2 ** $i++


of course returns the value of $i before incrementing it, meaning it starts off undefined. We can't have that.

my $i = 0;

Now we can produce a list of all those values:

map { 2 ** $i++ } @bits;

Plug this into say for debugging purposes:

say for map { 2 ** $i++ } @bits;

We've lost information - what happened to the fact some of the bits were turned off? Although I had this in knowledge, it was experience that reminded me that I can multiply:

map { 2 ** $i++ * $_ } @bits;

That's better - and we should always use $_ in a map, because map is supposed to transform $_.


Now we have something we can grep: $_ itself!

my @powers = map { 2 ** $i++ * $_ } @bits;
say for grep { $_ } @powers;

This collects all powers, but only reports those with a nonzero value.

We can fix the $i situation by using keys on @bits. keys on an array returns the list of indices, even though they're not really keys.

map { 2 ** $_ * $bits[$_] } keys @bits

This uses $_ in place of $i (0 to 4), but now that $_ is the index, we have to get the actual bit value by looking it up in @bits.

Answers on a postcard, please

Here's the final script, then

use 5.010;
use strict;
use warnings;

my $num = shift;
my $binary = sprintf "%b", $num;
my @bits = reverse split //, $binary;

my @powers = map { 2 ** $_ * $bits[$_] } keys @bits;

say for grep { $_ } @powers;


Day 4: RFPR for daemonize

I've embarked on a new term: RFPR. An RFPR is a Request For Pull Requests: like an RFC, except for when you've already started writing code and you want people to add features or fix it, instead of bikeshedding about the spec for it.

This first one is for my daemonize script, a wrapper around Daemon::Control, which I wrote essentially so I could type

daemonize starman --something --etc webapp.psgi

... and end up with an LSB script in init, because all the default answers to the questions were right.

Unfortunately the very first time I tried to use this somewhere else I discovered that it wasn't so straightforward, so now I'd like to collect either patches or issues on the repository for features or changes that would make this script that much more useful.

Essentially the goal is to automate as much of writing the Daemon::Control script as possible, and also to have an option to write it out as an init script instead of a Perl script.

Welp, just a brief one for day 4. They can't all be deep essays on the holistic nature of abstract data.


Day 3: Different shapes of data

One of the main points of sufferance for PHP is the conflation of what the rest of the world considers to be separate data structures: the array and the hash/dictionary/map/object/etc. Everyone agrees on the name of the array; less so on the name of the hash. We'll stick with hash (but later I'll say object, just to troll you).

This conflation is vehemently defended by PHP programmers, but I sense a certain cart-before-the-horse expectation if you try to get a PHP programmer to realise the problem with it. Which is to say, a PHP programmer has only seen PHP do it, and has seen how PHP works around the limitations of doing it, and therefore doesn't have the experience of languages with separate types to be able to understand intuitively that they are fundamentally different.

I'm not going to directly attack the fact it clearly has limitations, because this is acknowledged and understood; and everything has limitations. If we didn't have limitations, we wouldn't really have things at all, would we?

It is not the limitations of the aforementioned conflation that make it a problem; it is a deeper-seated, fundamental difference; logical in nature. Almost mathematically different, like numbers and vectors are.

I'm going to try to formalise the difference. Properly explain it, and make it plain.

We can start to understand the difference by scrutinising those very workarounds that PHP does use - to cope with the limitations - and the inconsistencies that we expect from any PHP anything at all ever.

Consider the array_merge function:

If the input arrays have the same string keys, then the later value for that key will overwrite the previous one. If, however, the arrays contain numeric keys, the later value will not overwrite the original value, but will be appended.


Values in the input array with numeric keys will be renumbered with incrementing keys starting from zero in the result array.1


It is being recognised that the structure is performing two functions; the first, with string keys, has unique properties. The same value cannot be repeated in the structure, because the identifying property of that piece of information is its string name: if the array were to have two keys of the same name, it would be impossible to distinguish between them on access. We can give this concept formal terminology: it doesn't make sense.

We say it does not make sense to have two keys with the same name. Looking at this under a semantic microscope we come to the realisation that we've accidentally used two different words for the same thing: "key" and "name". The key does not have a name; the key is a name. We can't restructure that sentence to avoid using both words, because whenever we try the thing we end up with doesn't make sense. We're forced to conclude that the reason we can't make the sentence make sense is that the concept we're trying to express cannot be formally expressed. Something that cannot be formally expressed can only be described as wrong, or nonsense, or such other dismissive words. The concept does not exist to be expressed.

The second concession this array_merge makes is that numeric keys are normally sequential. This, at first glance, appears to point to another uniqueness of key; two keys in an ordinal array will never be the same, for the exact same reason: the key is the key, and any access of that key will inevitably refer to the value associated with it.

Why, then, this acknowledgement that numeric keys are expected to be sequential? That is, why, if merging two arrays with numeric keys, do we concatenate, instead of overwrite?

This question starts to show the fundamental difference between the data structures. The principle is that of purpose.

Shape of a hash

String names are often called properties. This is because they:

  • Tend to refer to a real-world attribute of a real-world concept, such as a person's name or an item's weight.
  • Don't make sense independently of the item. A person's name isn't a person's name if the person isn't involved. "Name" is meaningless if you don't know what it's the name of.
  • Together, as a collection, sufficiently define the object being described.

Last things last, because that's important. All the properties of an object together define sufficient information about the object to perform all necessary tasks with that object, within the system. I'm saying object because that's a word we use both in the real world and in programming. An object in an object-oriented system has properties, or attributes. And observe that it is the set of attributes, not their names, that define the data structure.

A hash, or associative array, or whatever, is defining a single thing. The keys of this hash are the properties that are required to capture the important information about that item, just as the properties of an object are.

We will call the set of keys, or properties, that the hash has its shape. We can consider that formal terminology as well2.

Shapes of arrays

It is not infeasible that an object can have a numerical property. This is often proscribed by programming languages, which won't let you start property names with numerals when defining classes, but we're talking about hashes here. A hash can take any string value and use it as a property of the object.

For example, perhaps this object's keys are all identifiers into other things, and all values are boolean. It's an object representing associations between other things. A node on a graph, perhaps, storing other nodes' identifiers as keys, and boolean values determining whether there's a link to it.

A stretch, but not totally crap.

What of the ordinal array then? This is just it: the index you use to access an item in an array is not a property of the array.

We can actually see this best in a Java scenario: in Java, an array is an object that contains other objects. But the array has properties of its own: a length, a component type. Its resizable cousin, ArrayList, has functions that can be run on it: add, remove, and so on. The array does not have a property called 0, a property called 1, etc. It is a completely different thing.

In C++ the same structure (an array with flexible size) is called a vector. This is apt. Arrays are vector structures. The thing that PHP calls a "key" is actually an index; I already used the word, and so does PHP, interchangeably. But it is not a key! A key is a property of the data structure; an index is a position in the data structure, not a property of it.

The array is a line; a mathematical, one-dimensional structure. At integer points along its length can be found data of arbitrary type. But these are not properties of the array, any more than the values described by a line on a graph are properties of the line. The fact these things are in order - 0, 1, 2, 3 - is a phenomenon that follows on from the fact we're sticking more things onto the end. The ordering of the items in the array is not defined by the indices; the indices are defined by the ordering. The data in the array defines the shape of the array.

The hash is a bag; a lookup table. There is no graph that can describe a hash, because there is no natural ordering to the keys in it. Strings don't have natural ordering: "a" is only before "b" because we invented "a" and "b" and put them in that order. We didn't invent 1 or 2 and we didn't make 2 bigger than 1.3 Is your name before or after your height? That doesn't make sense!

The fundamental difference is there, then. The keys to an array are defined by the data in it, but the keys to a hash define the data that goes in it.

1 A salient question at this point is: how do you know whether it is a string or not? Is "0010" a string? If not, is it the number 10 or the number 2 or the number 8? All four things are valid interpretations under commonly-used rules.

2 As with all language, it doesn't matter what noises or letter-strings we use to define a concept. The important thing is that we all understand the same thing when we hear or see it. Let this word stand for the scope of this post; but you'll likely see the term "the shape of the data" referred to quite a lot in general.

3 We invented the symbols 1 and 2, but we didn't invent the platonic integers that 1 and 2 refer to. There was 1 earth before we evolved on it and used the symbol 1 to represent this number.

Day 2: Opt::Imistic

Can't believe I've not made a post about this ancient module. Opt::Imistic is a module I wrote to facilitate the writing of command-line scripts that take options. It was inspired by the node module of the same(ish) name, Optimist (now deprecated).

All Opt::Imistic does is parse @ARGV for things that look like options (using essentially the same rules as Getopt::Long does with gnu_compat options, i.e. the sensible way of doing it that doesn't cause too much ambiguity).

Long and short options are recognised by default, GNU style: -xyz is three options and --xyz is one. Use whitespace or = to give an option a value; = can be used if the value itself looks like an option1.

As the docs say, this is a 90% module - Getopt::Long is for the other 90%.

Hacky magic

Opt::Imistic relies on a piece of Perl magic the reader may not be aware of, which is that, for all of Perl's global variables, it appears to be the entire typeglob by that name that is global.

Simply put, this means that, because @ARGV exists, so does %ARGV. Opt::Imistic exploits this by storing the discovered options in %ARGV as keys, associated with their values, if any.

Overload magic

tm604 on IRC suggested that I can be even more magical if the discovered options were actually objects of a class that behaves correctly in different situations.

Since you can't prevent a person from multiply specifying a single-use option, instead of bailing horribly in this situation it's traditional to simply take the last instance of it. This implies the option needs a value; otherwise, it doesn't matter how many times you specify it. Think --config, for example.

Indeed, if the option doesn't take a value, it's usually expected that the script is going to count the number of times it's specified. Think -v, often "verbose", or -vvv, "extremely verbose".

Perl being Perl, the user doesn't have to care whether it was specified once or many times, if all the script cares about is whether it was specified at all. Zero is the false value here.

With a simple class2, entirely designed to carry overload magic, we can gather all this information at once.

package Opt::Imistic::Option {
    use overload
        '""'   => sub { $_[0]->[-1] },
        'bool' => sub { 1 };
}

This covers the common uses of command-line options:

  • One or more values - The objects are blessed array refs. Simply deref it for your values.
  • One value - Treat it as a string, and it'll stringify. This also works for numbers. The overload ensures the last value is taken; all options are arrayrefs with at least one thing in them, or absent entirely.
  • A countable option - Simply count your arrayref.
  • A boolean option - Just use it in boolean context. You'll get a 1 if it's there.

Again, this is a 90% solution, but check the docs for the extra functionality I added. You can specify that options are required, and that at least n arguments must be left on @ARGV at the end of parsing.

1 I'm not sure whether I just came up with this or not. This might not (yet) be true.

2 This package uses the package BLOCK syntax, introduced in 5.14. The module doesn't specify 5.14; this is an oversight.


Day 1: Pod::Cats

Today is the first day of the advent calendar blog thing, so I thought I'd give it a whirl. Let's see how far I get.

I thought I'd do an easy one and put it out there how I actually do my blog. Well, I don't like writing HTML, and I don't like WYSIWYG editors, but I wanted something easy like blogger to actually do all the hard work for me.

I don't really like Markdown, primarily because it doesn't let me do certain things easily1. Footnotes are something I do commonly when I'm writing2; they allow a certain second dimension to what would otherwise be a one-dimensional stream of words. In fact it's sort of a hyperlink, from before we had hypermedia.

You'll note, indeed, that my footnotes are hyperlinks. They link to their location on the page; and the footnotes at the bottom of the page link back to their marks. This is the sort of functionality I wanted from a blog markup language.

I decided that POD has a good balance of DWIM3 and expressiveness, so I took the concepts and generalised them.

This led to Pod::Cats being written. It really needs to be rewritten, now that it's something I actually use regularly. It's not my best code.

The name Pod::Cats came from a conversation I had quite some time ago in the #perl-cats channel on Freenode, wherein we thought it would be neat to have a community blog/podcast site called Podcats: the whole discussion started because someone typoed podcast.

Anyway, the module defines the grammar of Pod::Cats documents, but is intended to be extended to provide functionality. PodCats::Parser does just that. This module could also do with a refactor.

The Pod::Cats parser uses a subclass of String::Tagged::HTML (here) whose entire purpose is to just render when stringified. In fact the main module may do this now - I should check!

Bugs exist in String::Tagged::HTML whereby, because there is no inherent ordering to tags in the same place in the string, the order of render is at the mercy of Perl's hashing algorithm. LeoNerd is pawing at a solution to this, so with luck this will solve my footnote issues soon. I've been helping with moral support and distractions.

Anyway, I save my files with the .pc extension and use a reasonably consistent set of Pod::Cats commands to mark up my blog posts. The idea is to maintain semantic structure while minimising the amount of actual meta-stuff in the file itself: something I felt POD was good at, with a few amendments of my own.

Once done I simply run my script, which overwrites or creates the HTML for any .pc file with a later save date than the equivalent HTML, or missing HTML. Then I upload the HTML. This means I can fudge the HTML afterwards without worrying about it being overwritten the next time I run the script.


Currently I have no way of supporting images. I did try to; I looked into how Google uploads the images to Blogger. But there's no easy way of automating this, and I really couldn't be bothered working it out the hard way, so, currently, images are inserted in post-processing.

External images are supported with the =img command with the URL, however.


What follows is the entire .pc file for this post up to the end of this paragraph, so you can have a taste of what it looks like4 6

Today is the first day of the advent calendar blog thing, so I thought I'd give it a whirl. Let's see how far I get.

I thought I'd do an easy one and put it out there how I actually do my blog.  Well, I don't like writing HTML, and I don't like WYSIWYG editors, but I wanted something easy like blogger to actually do all the hard work for me.

I don't really like L<|Markdown>, primarily because it doesn't let me do certain things easilyF<1>. Footnotes are something I do commonly when I'm writingF<2>; they allow a certain second dimension to what would otherwise be a one-dimensional stream of words. In fact it's sort of a hyperlink, from before we had hypermedia.

You'll note, indeed, that my footnotes are hyperlinks. They link to their location on the page; and the footnotes at the bottom of the page link back to their marks. This is the sort of functionality I wanted from a blog markup language.

I decided that L<|POD> has a good balance of DWIMF<3> and expressiveness, so I took the concepts and generalised them.

This led to L<|Pod::Cats> being written. It really needs to be rewritten, now that it's something I actually use regularly.  It's not my best code.

The name Pod::Cats came from a conversation I had quite some time ago in the #perl-cats channel on Freenode, wherein we thought it would be neat to have a community blog/podcast site called Podcats: the whole discussion started because someone typoed podcast.

Anyway, the module defines the grammar of Pod::Cats documents, but is intended to be extended to provide functionality.  L<|PodCats::Parser> does just that. This module could also do with a refactor.

The Pod::Cats parser uses a subclass of L<|String::Tagged::HTML> (L<|here>) whose entire purpose is to just render when stringified. In fact the main module may do this now - I should check!

Bugs exist in String::Tagged::HTML whereby, because there is no inherent ordering to tags in the same place in the string, the order of render is at the mercy of Perl's hashing algorithm. LeoNerd is pawing at a solution to this, so with luck this will solve my footnote issues soon. I've been helping with moral support and distractions.

Anyway, I save my files with the .pc extension and use a reasonably consistent set of Pod::Cats commands to mark up my blog posts. The idea is to maintain semantic structure while minimising the amount of actual meta-stuff in the file itself: something I felt POD was good at, with a few amendments of my own.

Once done I simply run my L<|script>, which overwrites or creates the HTML for any .pc file with a later save date than the equivalent HTML, or missing HTML. Then I upload the HTML. This means I can fudge the HTML afterwards without worrying about it being overwritten the next time I run the script.

=h2 Images

Currently I have no way of supporting images. I did try to; I looked into how Google uploads the images to Blogger. But there's no easy way of automating this, and I really couldn't be bothered working it out the hard way, so, currently, images are inserted in post-processing.

External images are supported with the C<=img> command with the URL, however.

=h2 Sauce

What follows is the entire .pc file for this post up to the end of this paragraph, so you can have a taste of what it looks likeF<4> F<6>

=footnote 1 Like this

=footnote 2 Because I have a lot to say and I don't want to interrupt the flow of the sentence

=footnote 3 Do What I Mean

=footnote 4 I've artificially promoted the footnotes to this point, since they need to be the last thing in the file to render properly. This is something I need to fix; footnotes should be stored and rendered at the end irrespective of where they turn upF<5>.

=footnote 5 In fact an auto-numbering system came and went and shall come back again at some point.

=footnote 6 Also available L<|here>

[1] Like this

[2] Because I have a lot to say and I don't want to interrupt the flow of the sentence

[3] Do What I Mean

[4] I've artificially promoted the footnotes to this point, since they need to be the last thing in the file to render properly. This is something I need to fix; footnotes should be stored and rendered at the end irrespective of where they turn up[5].

[5] In fact an auto-numbering system came and went and shall come back again at some point.

[6] Also available here


What's wrong with JavaScript in the template?

Those of you keeping score will know that I recently started a new job. This one is Perl, not PHP, and so a certain level of standards is expected from the code. What with Perl having all these neato features and excellent web frameworks, I at least consider it on a par with Python and Ruby in its utility.

Perusing the new-to-me codebase I of course discover some of the hysterical raisins that live there, much of which is easily forgiven because the original coder had the foresight to apologise in a comment for doing it in the first place. But one thing stood out to me as a prime candidate for refactoring: JavaScript in the templates.

I said as much and was surprised to be posed the question, "What's wrong with JavaScript in the templates?"

Surprised not at being asked the question, but because I didn't know what the answer was. I've worked on the front end enough in previous jobs that seeing JS in template code makes me flinch, but never have I been asked to actually introspect this reaction and explain it.

Questions like that are primo blog post material, and it's been a while since I properly got my teeth into one, so on my journey home I put my mind to formalising quite what it was about it that made me want to rip it out and refactor the life out of it.

What it's not

Some obvious answers come to mind, with varying validity.

  • Is it because it's hard to find? No. Everything's hard to find. ack for it - you'll find it soon enough.
  • Is it because it violates separation of concerns? No. In fact, you could argue that it improves it, by encapsulating JavaScript only useful to a template inside that very template.
  • Is it because the only reason most people put JS in a template is so they can use the templating language to build JS? Well yes, but that's just the same question. What's wrong with it?
  • Is it because it's not reusable? Well, yes and no. Most template JS is not intended to be reusable; it's quite specific to that particular template, and there's little use for it elsewhere. More on this point later.
  • Is it the same reason we don't put CSS in the template either? Or inline in the HTML? Yes! By Jupiter, yes! We find the answer in the template itself. It's the other, main part of the template that we've not mentioned yet - the HTML.

What lies beneath

To answer the question, we must deconstruct the web page itself and look at the parts. What are we really looking at when we look at a web page? What are we really providing when we build a template? What is the purpose of the HTML, the TT2 or Jade or Mustache code that wraps or creates it?

Most web pages follow a similar structure: There's the <html> with its <head> and <body>; the body has a <div class="header"> or, better yet, a <header>, and some sort of <div id="content">. Then at last there's a bunch of stuff that finally gets to the point, i.e. displays whatever it is the page is displaying.

Most template structures separate all the pre/postamble from the content itself. Even in the CGI days we, naively but with good intent, would have a header.html and a footer.html and we would render the header, then the body, then the footer, to STDOUT. More recently, we have a single file with the pre- and postamble in it, and we import the rendered content into that. We tend to also have a considerable number of satellite template files representing handy widgets and reusable code and all the other things that I've already said aren't really the reason why we don't do the title of this article.
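That separation can be sketched in a couple of lines. The following JavaScript is purely illustrative - the markup strings are invented, and no particular framework is implied:

```javascript
// The frame (pre/postamble) lives in one place; each rendered resource is
// imported into it at a single well-known point. Markup invented for
// illustration.
const preamble =
  '<html><body><header>site-wide navigation</header><div id="content">';
const postamble =
  '</div><footer>site-wide footer</footer></body></html>';

// One reusable framing function: only the content varies per resource.
function frame(content) {
  return preamble + content + postamble;
}

console.log(frame('<p>A resource, rendered as HTML</p>'));
```

Every resource framed this way differs from its siblings only inside the content div, which is exactly the DRY property described above.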

We knew then, as we know now, something we always forget to talk about; something implicit in everything we do here. While we make all these templates rendering data in consistent ways we somehow lose sight of the simplest of notions: we are representing resources.

Resource and Framing

"Resource" is a fully-functional word, writ deep into the very clay with which we make our internets; vis-a-vis HTTP. HTTP works with a verb and a noun, i.e. it says "Do this to this". "Framing" is a word I've picked to describe what it is we website-makers do to resources to make them look nice for people using browsers that conform to the standards set out to allow us to do so.

HTTP's nouns are URIs. URI means Uniform Resource Identifier. The R in URI (or URL or IRI) means resource. It means thing; it's identifying the nouns of the internet. We respond to a (request to a) URI with a resource, represented in HTML format for the purposes of this discussion. We know this, but we never say this - and so whenever we get discussions, no one ever uses it as a basis for finding answers. But the concept of resource contains the answer to our question.

When we divide our templates up into separate files there is the tacit goal that the template we use to represent the actual, specific resource contain as little HTML as possible. Why? Well, mostly for consistency. We want to frame all our resources - at least those related to each other - in the same way. That means that if we put as little HTML as we can get away with into our resource templates, we can put as much as we can get away with into our framing templates, and thus have as little variation between the rendered resources as we can. A side effect, and therefore a second benefit, is that if we want to reuse or amend our framing, we can do this in one place - it's DRY.

We already recognise the difference between frame and resource: it's encoded right there in <div id="content">. How many of your templates resemble this structure?

  <div id="content">
    <% content %>
  </div>
  <more stuff></more stuff>

That right there is the boundary between Alliance and Reaver space. Uh, I mean, the place where the framing goes away and the resource begins. The resource is all the data that change when you ask for a different ID, or a different resource type. The resource is that which, if you took all the HTML away, would still be what you asked for.

I've nearly made my point

Not all resources are data. Some resources are forms. I'm choosing forms as an example for another resource type because we're all familiar with them doing stuff.

Forms contain no data, but instead prompt you for data, and allow you to create more resources. Nominally, they represent the structure of the resource type, but don't represent any particular record of that type. The form holds the key to the answer: behaviour.


<form action="/upload_image" method="post" enctype="multipart/form-data">
  <label for="image">Upload image:
    <input id="image" name="image" type="file">
  </label>

  <input type="submit">
</form>

This is a form with a file control, as you well know. It renders as a box with a "Browse" button. This one renders with a label, "Upload image:".

If you click on the label, the text of the input, or the browse button, you get the same behaviour: a file browser pops up. When you select a file and confirm it, the name of the file appears in the text part of the input, unless some jackass has installed Uploadify or similar, and broken it.

It also renders a single submit button. The button looks like all the other buttons on your website because you don't put CSS in your templates. The reason for that is being explained as we speak. I mean, as you read. I mean now.

When you click the submit button, the browser composes an HTTP POST request to the URL /upload_image on the host that served this resource. This request contains the entirety of the selected file, encoded in such a way that the receiving server can understand it. Presumably, the resource at that URL knows what to do with it.

Now, kindly point out to me the part of the HTML snippet above that implements any of that behaviour.

It's not there.

Nouns and adjectives - that's what the HTML is made of. There is not a single verb in the entirety of that form, and yet those few lines perform, implicitly, functionality that you would probably have to look up on Wikipedia to implement yourself.
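To see how much the browser does for you, here is a rough sketch of what a script would need to supply to replicate just the submit step by hand. This is illustrative only; buildRequest is an invented helper, and modern fetch and FormData already hide the truly grim parts (multipart encoding, connection handling):

```javascript
// Pure helper: derive, explicitly, the request parameters the browser
// derives implicitly from the form's attributes. (Invented for
// illustration; testable without a DOM.)
function buildRequest(form) {
  return {
    url: form.action,
    method: (form.method || 'get').toUpperCase(),
  };
}

// DOM wiring, browser-only: re-implementing the one verb the <form>
// already performed for free.
if (typeof document !== 'undefined') {
  const form = document.querySelector('form');
  form.addEventListener('submit', function (event) {
    event.preventDefault();
    const req = buildRequest(form);
    fetch(req.url, { method: req.method, body: new FormData(form) });
  });
}
```

And that is the easy, modern version; the form element has been doing this, plus the file dialogue, plus the label focusing, since long before fetch existed.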

Not all resources are forms, either. Here's a video resource, shamelessly stolen from Wikipedia, and represented in HTML format:

<video src="/movie.webm" poster="/movie.jpg" controls></video>

Here's a more familiar one:

<img src="/images/avatar.png" alt="avatar" title="Get your pointer off my face">

Noun-adjective-adjective-adjective. Noun-adjective-adjective-adjective. The <video> noun:

  • Fetches the resource at '/movie.jpg' of the host that served this HTML resource, and renders it at the place in the page concordant with the styling associated with it and the rest of the HTML.
  • Puts some sort of controls on this image, probably a play button, which, when clicked, causes the resource at '/movie.webm' to be fetched.
  • Renders the fetched video file in situ, replacing the still image, and plays any sound that comes with it.
  • Renders further controls, such as a scrubber, pause, volume slider.
  • Affects the right-click menu of the browser to provide appropriate options to a video: save video, get URL, get URL at this time, etc.

Plus anything else I've forgotten. The <img> noun has similar, albeit many fewer, effects: the image is fetched and rendered without user interaction. Indeed, if the image is an animated gif, it will animate! On its own!

This borderline-facetious set of examples serves to point out that the browser has already got verbs. The nouns (HTML elements) say which verbs you want to use (and where to put the visuals for the user's interaction), and the adjectives (the attributes of the elements) control the parameters that the verbs need. (Fetch which video? Play automatically?)

This is called semantics.


Semantics

I'm going to define semantics as the use of nouns to imply verbs[1]. Form fields come with behaviour, and you say which behaviour you want through nouns, i.e. the choice of which input you use. Semantics also covers those adjectives that fine-tune the noun's behaviour by describing it further.

Semantics tell things how to behave based on what the resource contains. An HTML resource often contains framing. Semantics go into the HTML to tell anyone who cares which bit they can ignore. Semantics is the way you phrase things; it's how you describe the resource.


<div id="content">

A web scraper can use this sort of thing to know what to ignore. Ignore is a verb. The HTML doesn't say "ignore this"; that's for the client to decide.

The browser isn't going to ignore it - but the browser doesn't care about this particular piece of semantics[2]. If the CSS says to do something to it then the browser will do that to it, but the browser doesn't do that by default.

The web scraper will skip anything outside this div - provided it knows what the 'content' ID means - and the browser will do nothing based on this ID because it hasn't been told to.
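To make the scraper's half of that contract concrete, here is a deliberately naive sketch. A real scraper would use a proper HTML parser; this string-walker exists only to show that the 'content' id is the entire agreement between the two parties:

```javascript
// Given an HTML string, keep only what is inside <div id="content">.
// Naive by design: the point is that the id is the only contract the
// scraper relies on, not that anyone should parse HTML this way.
function extractContent(html) {
  const open = '<div id="content">';
  const start = html.indexOf(open);
  if (start === -1) return null; // no semantic marker: nothing to extract
  let depth = 1;
  let i = start + open.length;
  while (i < html.length) {
    if (html.startsWith('</div>', i)) {
      depth -= 1;
      if (depth === 0) return html.slice(start + open.length, i).trim();
      i += 6;
    } else if (html.startsWith('<div', i)) {
      depth += 1; // a nested div inside the content
      i += 4;
    } else {
      i += 1;
    }
  }
  return null; // unbalanced markup
}
```

Everything outside the marker - header, footer, navigation - is framing, and the scraper never has to understand it.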

That right there is the answer. There is a difference between all the things it is possible for a browser to do and all the things the browser can already do. You can stick together awesome websites entirely using HTML5 and CSS3, but often you want behaviour that is not already built into the browser. Maybe you want div#content to have special styling or behaviour, but browsers don't come with that built in.

And indeed, styling is just a form of behaviour - CSS tells the browser how to behave when it renders certain elements in certain configurations. JavaScript tells the browser how to behave when the user does things.

This is the point where people start putting JavaScript into templates. A specific form needs special behaviour, so you add a <script> tag and then output the form.

Smash! go the semantics. Fie! cry the tortured frontenders.

None of the behaviour you ever write is useful only once. I told you I'd get back to the reusability point. The JavaScript doesn't go in the template because it's not reusable, sure, but why is that a problem?

The problem is the JavaScript defines verbs. Semantic HTML is that HTML which uses only nouns, and lets the browser select the correct verbs.

JavaScript, therefore, is correctly a separate resource that adds verbs to the browser, and defines the nouns to which they apply. That's why everything eventually ends up as a JavaScript plugin; and sometimes as core browser behaviour.

Essentially, we're saying that JavaScript is a CSS file that defines behaviour, not styling. Where CSS tells the browser how to interpret the semantics of your HTML in terms of colouring, positioning and so on, JavaScript tells the browser how to interpret the semantics in terms of direct functionality - behaviour.
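Read that way, an external behaviour file looks just like a stylesheet: selectors on the left, verbs on the right. A minimal sketch follows; the data-confirm attribute is invented for illustration and is not any real library's API:

```javascript
// behaviour.js - a "stylesheet for verbs". The template emits only a noun
// and an adjective (a hypothetical data-confirm attribute); this external
// file supplies the verb on every page that links it.

// Pure helper, testable without a browser: should an action proceed?
function shouldProceed(message, userConfirmed) {
  // No data-confirm attribute means there is nothing to ask; otherwise
  // defer to the user's answer.
  return message === undefined || userConfirmed === true;
}

// DOM wiring, guarded so the file also loads under Node.
if (typeof document !== 'undefined') {
  document.addEventListener('click', function (event) {
    const el = event.target.closest('[data-confirm]');
    if (el && !shouldProceed(el.dataset.confirm, window.confirm(el.dataset.confirm))) {
      event.preventDefault(); // the verb: cancel unconfirmed actions
    }
  });
}
```

The template only ever emits something like <a href="/delete/42" data-confirm="Really?"> - a noun and an adjective - and the behaviour arrives separately, cached and reusable.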

Indeed, not only should JavaScript never go into the template, it should never go into <script> tags either. Just like CSS should never go into <style> tags.

The Related Resource

Resources have related resources. If you strip out all the framing of your HTML resource (e.g. you render it as JSON instead) you are still going to keep many of the hyperlinks - the contents of any <a> tag inside the content div, perhaps some of the image sources. That's because the HTML framing is just rendering the content in a human-readable way[3]. The relations between resources are actually part of the resource itself, or at least metadata to it.

This is important because it addresses one of the main reasons people put JavaScript in templates: so that they can use the template language on the JavaScript, and thus build resource-specific JS that renders, e.g., a list of related resources when you click some "See related" button.

If the resources are related they should already be in the page. I seriously cannot stress that enough. Either the related resources are, or are not, relevant to this representation of the resource.

If the HTML went away and you were returning JSON, would you, or would you not, list those related resources as metadata, one way or another?

They cannot be part of the framing: the framing is consistent across the whole site! They are unique to this resource; and the style of list that is invisible until a button is pressed is unique to this type of resource.

But is "style of list" not an adjective about this list? Is list not a noun? Cannot you use the noun-adjective semantics to say, "This is a list of related resources, and it is of type pop-up-on-button"? HTML is amply equipped to represent this semantically: we even have the rel attribute to let you specify which button should activate the list.

Related resources belong in the page. Either as a hyperlink, or directly in the HTML. If you want to save bandwidth, you don't put the whole list in, but you put in a hyperlink placeholder instead. The important thing is that the HTML is accurately representing the resource. Just like the JSON would. Don't force non-browser consumers of your HTML resource to figure out how to run the JavaScript just to get related data.
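As a sketch of the hyperlink-placeholder approach: the relation lives in the markup, and the JavaScript merely reads it. The rel value, endpoint and markup below are all invented for illustration:

```javascript
// Hypothetical markup the resource template might emit:
//
//   <a rel="related" href="/widgets/42/related">See related</a>
//
// Pure helper, testable without a DOM: find the first "related" link.
function findRelatedLink(links) {
  return links.find(function (l) {
    return (l.rel || '').split(/\s+/).indexOf('related') !== -1;
  }) || null;
}

// DOM wiring: follow the hyperlink when asked, asking for JSON. A JSON
// client would discover the very same relation in its own representation.
if (typeof document !== 'undefined') {
  const link = findRelatedLink(Array.from(document.querySelectorAll('a')));
  if (link) {
    link.addEventListener('click', function (event) {
      event.preventDefault();
      fetch(link.href, { headers: { Accept: 'application/json' } })
        .then(function (response) { return response.json(); })
        .then(function (related) { console.log(related); }); // render as needed
    });
  }
}
```

Nothing resource-specific ever has to be templated into the script; the script finds the URL the same way any other consumer of the resource would.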



A good example of this is Chosen. You've probably seen it before. You start typing in a form field, and it lists all matching options, filtering as you type.

Chosen can either use an existing set of options, such as from a select box, or a URL from which to fetch options that match the string.

Both of these can be in the HTML before the JS even runs. The list of options is a related resource; it is simply represented in different ways. The first way puts all of the related resources in with the main resource; the second way puts a hyperlink to a single other related resource, from which they can be fetched when it's appropriate to do so.

At no time is it necessary to put this data into the JavaScript. JavaScript can read. Hell, the JavaScript should work on the JSON representation and all you'd have to change would be how it finds the data.
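Both of Chosen's modes reduce to "read the markup first". A sketch in that spirit follows; data-source is an invented attribute standing in for real configuration, not Chosen's actual API:

```javascript
// Progressive enhancement, sketched: the options either already sit in
// the <select>, or the markup carries a hyperlink to fetch them from.
// "data-source" is invented for illustration.

// Pure helper: decide where the options live. Works on a plain object,
// so it is testable without a DOM.
function optionSource(select) {
  if (select.dataset && select.dataset.source) {
    return { kind: 'remote', url: select.dataset.source };
  }
  return { kind: 'inline', options: select.options || [] };
}

// DOM wiring, browser-only.
if (typeof document !== 'undefined') {
  document.querySelectorAll('select.enhanced').forEach(function (select) {
    const source = optionSource(select);
    if (source.kind === 'remote') {
      // fetch(source.url) as the user types, filtered server-side
    } else {
      // filter source.options client-side as the user types
    }
  });
}
```

Swap the HTML representation for JSON and optionSource still applies: the relation is part of the resource, not of the script.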

The Answer

The answer, then, is semantics. Of course it is. But it's what semantics means that turned out to be the difficult thing to define here.

Semantics is about saying what this resource is; it's metadata about the resource itself. Semantics allows the client to make the decisions about what parts of the resource are relevant and what parts are not.

It's exactly the same principle by which responsive web design works.

It's exactly the same reason you don't put inline CSS into your HTML.

It's exactly the same reason you've never written a video player, or had to decode the JPEG file format manually in JavaScript and blit the resulting bitstring onto a canvas element.

It's exactly the same reason you don't know how to launch a file browser dialogue box.[4]

It's exactly the same reason web components exist.

It's exactly the same reason JSON resources don't come with a stylesheet or JavaScript.

It's exactly the same reason we now have <nav> and <section> elements.

It's exactly the same reason we can produce screen-reader-friendly representations of HTML pages when the HTML page is correctly structured.

It's because you are describing what the resource is, and letting the client decide what it does.

*drops mic*

[1] A separate discussion

[2] Not all HTML is for the browser. HTML is a perfectly sensible representation format for machine use as well.

[3] Perhaps better: the HTML framing is a machine-readable way of getting the browser to render the content in a human-readable way.

[4] In principle. HTML5 advances in file handling mean it is more common for the file dialogue to be called directly from JS.