Back to school: Digital rights management

My final post to the discussion forum for the Whitireia Diploma in Publishing.

On Digital rights management (DRM)

DRM makes me queasy. Whichever way you cut it, it’s a hard one to know which way to go. Ultimately I’m drawn to the DRM-free side of the argument, simply for the sake of making life easy for consumers – they are, after all, the last people publishers want to get off-side with. If I were a publisher I’d give DRM-free a shot and see how it went; it seems like a suck-it-and-see kind of thing. At the very least, follow some of Jon Noring’s suggestions to keep the DRM light and the file as flexible for the reader as possible.

It’s a vexed question, and the hardest part is who pays the author when no one buys the book. I don’t think anyone quite has an answer to that, but nor does anyone have substantial facts and figures about the effect DRM-lite or DRM-free might have on sales.

But the other question is: what’s the real threat? Sure, one person might buy a book and mention it to a friend, who’ll ask to borrow it – a situation no different from a print book. The fear is that because ebooks are so much easier to distribute, the person who buys the book won’t just lend it to the friend who asks but will, for some reason, send it to all their friends just in case they want to borrow it as well. I don’t think that’s likely.

Where the argument falls down is that under a DRM-free model the friend who borrowed the book doesn’t have to give it back. I’m not sure that’s a huge problem: if they didn’t like the book, nothing lost and nothing gained; if they did, they’ll probably buy others by the same author without waiting to borrow copies first. As NAP’s Jensen says, DRM gets in the way of discoverability, whether by search engines or people. Conversely, unlocking content makes it findable and turns it into its own marketing device.

It does make it sound as though publishers are being held to ransom – drop DRM or the vandals at the gates will pirate your books. The better response is to create a relationship that readers want you to feed – subscription models for up-to-date titles, pre-releases, special deals and so on.

I think there’s something in the idea of micro- or distributed patronage and I’d like to see it take off. It’s a kind of dreamy world though: Radiohead pulled it off (I forget the album – maybe not one of the better ones…), and This American Life makes regular calls for donations to support its podcasts; Kiva’s a different take on the same idea, and Brooklyn Museum supports a fan network through donations. These are all examples where there’s a community and an organisation that drives the patronage. I’m not so sure how that translates to lonely writers and their often-mistrusted publishers. (And I say that with no slur intended on publishers, but more to point to a misunderstanding of the value of publishers held by many readers.)

Is that the real challenge for publishers? To build genuine communities that readers want to belong to and feel are worth continuing to contribute to financially; to break down, at the same time, some of the misunderstanding about the role of the publisher and the constraints under which they operate; and, further, to demonstrate to readers the collaboration that takes place between author, editor, publisher and, ultimately, reader. Letting readers know how valuable their contribution is to maintaining the ecosystem might be one of the biggest sales yet.

In short, forget about DRM and think about your readers, and make them think about you.

Back to school: Territorial rights

Another post I made to the discussion forum for the Whitireia Diploma in Publishing.

On Territorial rights

Well, I’m not entirely clear what the big threat to local publishers is, nor even to the big ones. And if there is a problem, I think it could be worse for the big publishers who have come to rely on revenue streams from buying and selling territorial rights. Large publishers are already dominant – that’s been the case in New Zealand for years – but maybe the end of territorial rights breaks one of their strangleholds if it means New Zealand publishers can go straight out to other markets. Learn from the French and Spanish publishers and retain world rights, as Edward Nawotka suggests.

There aren’t currently a lot of New Zealand books selling rights overseas, and there’s probably a useful enough model that could take the place of selling territorial rights – maybe shared rights on world sales? I’d be interested to know how much money we’re talking about in terms of overseas rights on New Zealand books, and whether that might be recouped by a share in the world rights.

New Zealand authors will still primarily sign with New Zealand publishers, and New Zealand publishers will continue to sell New Zealand books to that relatively small percentage of the local population that thinks it’s important to support New Zealand authors, books and publishers. Removing territorial rights won’t fundamentally change the precarious state of these relationships one way or the other as far as I can see.

What the smart New Zealand publisher can do though is start selling ebooks and print-on-demand ready books anywhere they can find a market. Isn’t that a good possibility? And a little bit of success for a publisher is going to attract more authors wanting to piggy-back on it.

Maybe there’s a chance overseas publishers will dump stock on New Zealand, though that happens already, and New Zealand is such a small market that this may counter any increase. Typically, too, dumped books are dross and not the sort of thing that many New Zealand publishers produce. (Side-note on non-dross parallel importing: here’s a project for someone with a contact at Unity Books – ask them where their yellow circle logo comes from.)

Martin Taylor defended territorial rights on Teleread last year, but it strikes me that he’s talking about a handful of local companies buying local rights and/or handling distribution deals. I’m not convinced that really encourages a local industry, certainly not one that’s focussed on developing authors and high-quality local content. I also suspect most of these ‘local’ companies are local branches of overseas publishers.

I’m optimistic that the end (and yes, I’m assuming the end will come) of territorial rights won’t mean the end of different prices in different markets. David Grigg, writing from Australia, complained that he couldn’t buy a US ebook. That was based on IP recognition: the website he tried to buy from knew where he was and refused the sale. In the same way a website will be able to recognise users coming from developing countries and price ebooks accordingly.
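Something like that could look as simple as the sketch below – a minimal illustration in which every country code, tier and price is invented, and where a real store would plug in a proper geo-IP lookup and its own pricing rules:

```python
# Hypothetical sketch: price an ebook by the buyer's detected country.
# The tier table and prices are invented for illustration; a real store
# would use a geo-IP service to resolve the buyer's location first.

BASE_PRICE_NZD = 19.99

# Invented mapping of country codes to pricing multipliers.
PRICE_MULTIPLIERS = {
    "NZ": 1.0,   # home market
    "US": 1.0,
    "AU": 1.0,
    "IN": 0.4,   # example lower-income tier
    "KE": 0.4,
}

def price_for_country(country_code: str) -> float:
    """Return the ebook price for a buyer in the given country."""
    multiplier = PRICE_MULTIPLIERS.get(country_code, 1.0)
    return round(BASE_PRICE_NZD * multiplier, 2)

# The sale still happens everywhere; only the price changes.
print(price_for_country("NZ"))  # 19.99
print(price_for_country("IN"))  # 8.0
```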

The end of territorial rights seems to have come to be accepted as inevitable. Consumers aren’t putting up with it and won’t, and it forms part of a traditional approach to the world that no longer really works. I seriously think that local New Zealand publishers can either avoid much impact, since they’re not in the habit of selling overseas rights, or can benefit from retaining world rights and selling to new online markets.

Maybe I’m being naive – very happy to be rebuked!

Back to school: Pricing ebooks

Another post I made to the discussion forum for the Whitireia Diploma in Publishing.

On Pricing ebooks

This seems like one of the trickiest bits of the emerging publishing reality: how to price ebooks, and what effect that has on the entire chain of marketing and distribution. And what effect on print books? That may be one of the keys to the discussion – does the ebook only exist in relation to a printed equivalent, and how many print equivalents are there? For publishers today, it seems the ebook doesn’t exist without a printed book, and for many there’s both a hardcover and a paperback to consider.

Amazon’s dominance has confused the issue, so the emergence of an agency model is encouraging if it hands some control back to publishers. That control would allow publishers to set the price across all channels, or even only a few channels if the percentages worked out in such a way that they could ignore some. If Apple can help push an agency model, good on them; my only worry with Apple is the gate-keeper role they tend to play – various complaints about getting iPhone apps released through their approval system suggest at least a little concern is warranted.

The problem as publishers see it remains one of estimating the knock-on effect on print of selling ebooks, and how to price ebooks relative to the print editions. I miss the point on this every time, so please set me straight, but why not just set the ebook price to match the cheapest print price? While the hardback is the only edition out, the ebook is that price; when the paperback is released, the ebook drops to that price. There must be a flaw here, but I’m not seeing it.

But the concern about the effect of ebooks on print is in a sense an invented one: if you’re still selling x number of ‘books’ then where’s the issue? The real problem is the uncertainty over how many people want a print copy and how many would rather have the electronic copy. Do you print fewer at a higher unit cost but with lower warehousing costs, or should you include the number of e-copies you hope to sell when working out the unit cost over the combined print-plus-ebook run? Again, the theory doesn’t seem complicated; it’s just that we’re dealing with a new set of variables that haven’t settled down yet.
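To make the arithmetic concrete, here’s a back-of-the-envelope sketch with entirely invented numbers: the fixed editorial and production costs are spread over the combined print and ebook run, and the per-copy printing cost is added only to printed copies.

```python
# Back-of-the-envelope sketch with invented numbers: spreading fixed costs
# (editing, design, typesetting) over a combined print + ebook run.

fixed_costs = 12000.0      # editorial, design, typesetting (invented)
print_unit_cost = 4.50     # printing and warehousing per printed copy (invented)
print_run = 2000
expected_ebook_sales = 1000

total_copies = print_run + expected_ebook_sales

# Fixed costs are shared across every copy, print or electronic.
fixed_per_copy = fixed_costs / total_copies

cost_per_print_copy = fixed_per_copy + print_unit_cost
cost_per_ebook_copy = fixed_per_copy

print(f"Fixed cost per copy:  ${fixed_per_copy:.2f}")       # $4.00
print(f"Cost per print copy:  ${cost_per_print_copy:.2f}")  # $8.50
print(f"Cost per ebook copy:  ${cost_per_ebook_copy:.2f}")  # $4.00
```

The numbers are made up, but the point stands either way: the unit cost swings noticeably depending on whether the expected ebook sales are counted in or left out.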

Another thought: something a friend said recently stuck with me – she’s no longer buying a book without first reading a library copy. She’ll only buy once she’s decided something is worth reading again and therefore worth owning. There could be something in that for publishers – a cheap or free ebook that whets the collector’s appetite for the ‘real’ thing, or the sampler that tempts people to buy the enhanced ebook, whatever that is.

Which brings us back to the first question for publishers to consider: is the ebook just another variant of a print title, or is it something with no print equivalent? If the latter, then pricing becomes relatively straightforward – editorial and production costs, royalties, distribution (i.e., data management) costs, etc., plus a profit on top. And publishers will need to do as the Harvard post says – make insanely great things that people want to buy, and then there’s no harm in charging what it costs to produce. Make something that’s dull (like a poorly imitated book experience) and price-setting is far more at the whim of the purchaser.

Literary agents

Interesting to see a literary agent getting into the ebooks game, justified on the basis that publishers aren’t offering enough money to authors (or the agent?) and that digital rights weren’t negotiated for out-of-print titles. I wonder, though, whether the publisher should still get a cut, based on the original editing, setting, marketing and popularising of the book that the agent and author could now profit from.

Back to school: Comparing workflows

Another post I made to the discussion forum for the Whitireia Diploma in Publishing.

On Comparing workflows (XML-first or last)

I’m an XML believer. There, I said it. But I’m not so sure about the workflow, and the fact that publishers have been arguing over XML workflows for at least a decade, if not more, shows it’s not a clear-cut issue.

Clearly there’s a need to maintain a good process around authors, editors and others working off the same file. I think Anne posted to clarify that there’d be some kind of sequencing involved. I’m unconvinced that authors will take to XML authoring; some editors will, and in larger publishing houses overseas there are dedicated technical editors doing just that. (And getting paid a little better than text editors.)

OpenOffice was mentioned briefly. Used well, it will produce a clean document and good XML, so a process where an author writes in their beloved Word and a technical editor converts that to OpenOffice and then to XML sounds simple. Note, though, the point about how even now we don’t use Word properly. A well-formatted document is fairly easy to work with and convert into other formats. OK, not that easy, but if the headings and body text are at least styled with the built-in styles it’s a good start, and typesetters have been working with styles for a few decades at least.
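As a small illustration of why consistent styles matter, here’s a minimal sketch – assuming the third-party python-docx library, with invented element names rather than any real publishing schema – that reads the built-in paragraph styles from a Word manuscript and emits very simple XML:

```python
# Minimal sketch (not a production workflow): read built-in paragraph styles
# from a Word manuscript using the third-party python-docx library and emit
# very simple XML. Element names ("chapter", "heading", "para") are invented
# for illustration, not a real publishing schema.
import xml.etree.ElementTree as ET
from docx import Document  # pip install python-docx

doc = Document("manuscript.docx")   # hypothetical file name
root = ET.Element("chapter")

for para in doc.paragraphs:
    if not para.text.strip():
        continue  # skip empty paragraphs
    if para.style.name.startswith("Heading"):
        # Keep the heading level, e.g. "Heading 1" -> level="1"
        level = para.style.name.split()[-1]
        el = ET.SubElement(root, "heading", level=level)
    else:
        el = ET.SubElement(root, "para")
    el.text = para.text

ET.ElementTree(root).write("chapter.xml", encoding="utf-8", xml_declaration=True)
```

The point isn’t the particular tool: once the author’s headings and body text carry consistent styles, the mapping to markup is largely mechanical.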

Typesetters are one of the sources of technical editors; XML is just another form of mark-up that they’ve learnt on top of various typesetting packages. And any publishing process, as you know, involves cleaning up what the author submits. The round trip gets more complicated, however, if the author wants to make changes at final proof stage. Where do you make the changes in a way that fully exploits the single-source XML file but avoids the designer having to reset the XML?

OUP were struggling with this, and the solutions weren’t going to be simple. For them it was worth the effort, as they weren’t just talking about final proof changes but about round-tripping editions. At the time they were planning to do the first edition as XML-last, then generate a Word document for the author to amend for the second edition. A keying agency would then compare the document before and after the author’s amendments, identify the changes and enter them into the XML file. Cheap it isn’t.

So who’s going to pay for it? Obvious question, and as you all note, for smaller publishers, probably no one. They’ll either muddle along and manage somehow or just not bother with XML. I think it’s possibly too easy for fiction and poetry publishers to say they don’t really need XML and excuse themselves the pain of an XML workflow and the expense of XML editors. True, there’s more need in other types of publishing, but over time XML can pay off even for fiction – it’s easier to share and license, reprint, store, archive and so on, and is far more likely to work with future technology than a document from today’s version of Word. (But that’s an argument for XML rather than for an XML workflow.)

One of the key decisions I think a publisher needs to make is whether they’re creating a book or a collection of data. If it’s the former (and for most local publishers this is the case), then an XML-last workflow will work well. Get the book written, edited, set, finalised and printed, then create an XML file as the source for any future renditions. If however a book is only one of the editions you’re planning, then XML-first (or XML-early) is going to offer a lot of benefits for single-source publishing all the formats.

Either way, an editor who understands markup and good formatting, and especially XML, may not be highly paid but will be highly valued.

Back to school: Books or websites

Another post I made to the discussion forum for the Whitireia Diploma in Publishing.

On Books or websites

I hate to say it, but I’m at a loss as to what to add to this discussion given your very thoughtful and sound responses. There seem to be two streams to the conversation, which aren’t entirely in opposition: one, that there really isn’t a difference between the two formats and that it comes down to how a reader wants to interact with content; the other, that it’s not the formats that matter so much as the content – the web does well with short, pithy content, while books and ebooks do better with long-form reading.

The temptation here is to fall into the old argument (is it an old argument yet?) about whether it’s possible to read long-form online or in any electronic form at all. It’s a bit of a moot point as it’s obvious there will be readers who can and will; and there’ll be publishers happy to deliver the ebooks. Whether long-form reading is possible at a desktop is debatable; I can’t do it, though I often want to dip into a novel or similar to remind myself of what happened. A website might be better for that, but if on dipping in I decide I need to read the book again then I want easy access to another format.

Many of you have noted the similar functionality available (or possible) between eReaders and websites – both can do video, both can include added-value content (even if it’s what Booksquare might scathingly refer to as “some marketing person’s notion of value”), both can link to further sources, definitions, social media, etc. Are there any important differences left?

Permanence and impermanence is one difference that springs to mind. A traditional novel needs to be permanent; it’s an author’s construct, and they construct it carefully and with thought as to how the reader will advance through the story, learning or losing the plot as the author intends. A scientific paper, on the other hand, can do well with a bit of impermanence, especially in draft form. Releasing findings early and getting feedback from peers has been talked about off and on for years (no solid examples, sorry – send in some if you have any), and the pace at which research can change suggests changes even after publication. Is the former better suited to a format I can download and keep as an ebook, while the latter needs to live on a database-driven website?

I’m less sure about reference works. Conventional wisdom is that dictionaries and encyclopedias should be up to date. It’s the basic Wikipedia model, a model that’s been accepted by dictionary publishers like OUP and Collins et al. (though perhaps not adopted, given the institutional lethargy they tend to face).

But what about reference works as a snapshot in time? What does something like the 1966 Encyclopaedia of New Zealand (published unmodified on Te Ara) tell us about how New Zealand saw itself in the 1960s that an updated version doesn’t? The constant updating is almost like saying that the present isn’t going to become history, so we don’t need to leave a record. In that sense I think websites, compared to books (whether e or p), are problematic, regardless of what the Internet Archive’s Wayback Machine or the National Library’s web harvest hope to achieve.

So that ends on a bit of a down note – apologies. But it’s worth thinking about how to maintain permanent content while layering updated and current content over the top, and how both flexibility and solidity can be built into all formats.