February 23, 2011

Context First

I just listened to a talk by Brian O'Leary called "Context first: A unified theory of publishing" (http://vimeo.com/20179653). In the talk O'Leary posits that the thing killing the publishing industry is something he calls the container model. Publishers, and authors, think of content in terms of the container it is intended to fill, and in doing so leave the content's metadata, its context, on the table. A newspaper company, and its writers, think of the content they generate as articles that live in a single edition of the newspaper. All of the context that links an article to other articles in time and space is lost. When the article goes on-line, there is an attempt to recreate the context, but the full context is never restored. The paradigm needs to shift so that context is a primary consideration when creating content. Modern customers live in a world of content abundance and thus do not value content as much as they value services that make content discovery easy.
What does this have to do with technical writing? A lot. A large chunk of what technical writers do is make information accessible and discoverable. If we think primarily in terms of books, articles, help systems, topics, etc., then we run the risk of forgetting how each chunk of information fits into the whole and of failing to make that clear. We also forget to add the metadata needed to make the content easily discoverable. It is the indexing argument for the digital age. Authors put indexing off until the end and usually end up with less than ideal indexes or none at all. Now we skip the indexes because everyone uses search to discover content, but we don't add any of the metadata that would make searching the content better. We leave it to full-text search to pluck words off the page, or to title matches.
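To make the point concrete, here is a sketch of the kind of descriptive metadata a topic could carry, using Dublin Core terms embedded in an HTML head. The titles, file names, and subject terms here are hypothetical, but the pattern is real: a search engine or library index can use these fields instead of relying on full-text matching alone.

```xml
<!-- Hypothetical example: Dublin Core metadata in a topic's HTML head. -->
<head>
  <title>Configuring the Broker's Failover Transport</title>
  <meta name="DC.title" content="Configuring the Broker's Failover Transport"/>
  <meta name="DC.subject" content="failover; transports; high availability"/>
  <meta name="DC.relation" content="broker-overview.html"/>
  <meta name="DC.isPartOf" content="Broker Administration Guide"/>
  <meta name="DC.date" content="2011-02-23"/>
</head>
```

A few fields like these preserve exactly the context the container model throws away: what the topic is about, what it relates to, and what whole it belongs to.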
Thinking about content as part of a whole and adding metadata to improve content discovery are key parts of a modern digital technical library. They are also value that requires specific skills to create. Indexing is hard, and so is tagging.

February 9, 2011

Someday ...

Interesting thoughts on building software that can be applied to documentation as well: AlBlue’s Blog: Someday ...: "Someday, all software will be built this way. I've been a fan of Git for a while now; I've written a few Git posts in the past including the..."
In the case of documentation, the source would be XML of some ilk and the build process fully automated.
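As a sketch of what "fully automated" could look like for DocBook source, here is a minimal Ant build file using Ant's built-in xslt task and the standard DocBook XSL stylesheets. The file names and paths are hypothetical; the point is that publishing becomes a repeatable build step, not a manual process.

```xml
<!-- Hypothetical Ant build: transforms DocBook XML source into HTML.
     Assumes the DocBook XSL stylesheets are available locally. -->
<project name="docs" default="html">
  <target name="html">
    <xslt in="src/userguide.xml"
          out="build/userguide.html"
          style="tools/docbook-xsl/html/docbook.xsl"/>
  </target>
</project>
```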

January 28, 2011

Mark-up Smackdown

In the tradition of everything old being new again, mark-up languages are making a serious comeback for professional technical documentation. In the dark ages, troff and other *roff variants ruled the roost. As more writers moved into technical writing and computer graphics got more powerful, WYSIWYG tools like FrameMaker and Word rose to prominence. Now the pendulum is swinging back to mark-up.
Mark-up languages come in two basic flavors: presentation mark-up and structural mark-up. Presentation mark-up is focused on how the text is presented; structural mark-up is focused on the structure of the content. The difference is subtle but important.
Focusing on presentation, as most current WYSIWYG editors do, tends to favor a particular presentation medium (the Web, print, slides). While the presented content appears to have a structure because it has headings, lists, etc., the underlying source has no real structure. A writer is free to use lists and headings in any way they wish. This is nice for the writer, but makes content reuse more difficult.
Focusing on structure removes the preference for any one particular presentation medium, but it does mean that more work is required to transform the source content into a presentation medium. The underlying source has an enforced structure to which a writer must adhere. The enforced structure is limiting, but allows for easier reuse.
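As an illustration, compare the same admonition written in presentation-oriented HTML and in structural DocBook. The element choices here are a sketch, not a style guide, but they show where the "how it looks" decision lives in each flavor.

```xml
<!-- Presentation mark-up: says how the text should look. -->
<p><b>Warning:</b> <i>Stop the broker before editing this file.</i></p>

<!-- Structural mark-up (DocBook): says what the text is.
     How a warning is rendered is decided later, by the toolchain. -->
<warning>
  <para>Stop the broker before editing this file.</para>
</warning>
```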
Among the popular current presentation mark-up languages there is a pretty consistent preference for Web presentation. Along with HTML there are a number of wiki mark-ups in use, including MediaWiki, Confluence, and MoinMoin. There are also Markdown and Textile. They all attempt to make it easier to craft good looking content for the Web, and to a high degree they all succeed. I personally like Textile and Markdown because they allow the writer to mix in HTML code to fill in gaps left by the mark-up language. The drawback to all of these languages is that they do not replicate all of the functionality of HTML and their syntaxes can be finicky. If you don't get it exactly right the resulting output is bad, and there are no good tools to help you get it right.
In terms of appropriateness for large technical documentation projects, presentation languages have serious drawbacks that counteract the oft touted claim that they are way easier to use than the alternatives. Because they leave structure up to the writer and the base unit of content is a page, it is difficult to recombine content or enforce a uniform structure across a documentation set. They don't generally provide tools for indexing content or for organizing content beyond a single page. They also don't have easy translations into any presentation medium other than HTML.
Structured mark-up languages, such as DocBook and DITA, push concerns about presentation into backend processing stages. The mark-up itself deals with content structure. They have enforced concepts of what makes up a unit of content. DocBook uses structures like chapters and sections. DITA uses structures like procedures and reference topics. These units of information are easily combined into larger structures like books and libraries. The drawback for some writers is that there is no easy way of seeing what the content will look like when it is published. A lot of people find it easier to write when they can see a representation of the final product, and feel like they need control over the design of the content on the page. Another drawback is that structural mark-up tends to be more complex than presentation mark-up. The learning curve is steeper, but there are a number of tools available that support content completion for DocBook and DITA.
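In DocBook, for example, combining units into larger structures is mechanical: a book can pull in separately maintained chapter files with XInclude. The file names below are hypothetical, but the assembly pattern is standard.

```xml
<!-- A DocBook book assembled from stand-alone chapter files via XInclude. -->
<book xmlns:xi="http://www.w3.org/2001/XInclude">
  <title>Broker Administration Guide</title>
  <xi:include href="install.xml"/>
  <xi:include href="configure.xml"/>
  <xi:include href="failover.xml"/>
</book>
```

The same chapter files can be included in other books or libraries without change, which is exactly the reuse that page-oriented presentation mark-up makes difficult.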
For large scale documentation projects structured mark-up, despite its steeper learning curve, has the edge over presentation mark-up. The freely available toolchains for them provide translation into Web and print formats. They have indexing mechanisms and provide structures to support content beyond a single page or unit.
Presentation mark-up languages will continue to be good choices for content developed by small teams or developers. For big projects done by professional teams, structured mark-up is the future.

January 14, 2011

Resolutions

I recently read an article by one of those magazine shrinks that said that the important part about New Year's resolutions isn't keeping them; it's making them that matters. The process of making resolutions forces you to imagine how you would like your life to be different and imagine actions you can take to make the dream real. The more specific the resolutions the better.
Since it is that time of year, I'm going to take the article to heart and make three specific resolutions: one for work, one for family, and one for me.
For work I resolve to work as part of a team that accepts nothing short of excellence. Far too often we settle for doing the minimum because of resource constraints, or we accept crappy user interfaces because the developers know best. This year I resolve that I will strive to do what is needed to provide the maximum benefit for the end user. I will not simply accept good enough. I will not sit idly by when a developer creates a bad UI or tries to slip a buggy feature into a release because it is good enough or there isn't enough time to fix it.
At home I resolve to do more around the house. I have a bad habit of putting off washing the dinner dishes until H just does them. I also tend to let laundry sit without being folded. In the warmer months I'm not great at keeping up with the yard work. This year I will be better about getting this stuff done.
For myself I resolve to take better care of myself. This includes flossing every night, doing something active at least three times a week, and eating better. I'll think twice before stopping at the McDonalds for a super size Big Mac meal. I'll actually order non-fat lattes. I'll eat more veggies. I'll actually start using the gym at work.
I want to be around for Kenzie for as long as possible. I also want to be a good role model for her. I want her to grow up seeing her dad living a healthy lifestyle, treating his partner with love and respect, and striving to be the best that he can be.
I know I'll fall short of these resolutions, but I will try to get closer to living my life according to them.

November 30, 2010

The Silent Bias

One of the frustrating things about working as a writer in engineering driven organizations is the persistent, although unspoken, bias against documentation. There is a lot of lip service given to how "good documentation is as important as good code", but the truth comes out when the rubber hits the road. Writers are not given the same respect as coders, coding takes priority over reviewing, documentation is expected to be rushed, and documentation is rarely involved in product design except as an afterthought.
Not too long ago, I was part of a discussion about how to handle documentation for one of the Apache projects. The community wanted to make it easy for people to submit documentation to the project, so they had set the barrier to entry well below that which was acceptable for code submissions. Any code written by a contributor without commit rights must be reviewed by a committer before it can be added to the code base. For documentation, the only requirement for making changes was a signed CLA. Everyone agreed that documentation is as important as code, but didn't see how the different standards exposed the truth. Committers didn't want to have to review documentation changes because it was a hit on their time.
The other interesting thing about the discussion was the assumption that writers wouldn't, or couldn't, learn how to use a source control system or a build system. Most writers do find source control systems tedious, but they are also a fact of life for anyone working in engineering centric firms. A writer who is capable of writing good documentation for an open source project is more than capable of learning a build system and a source control system.
I won't go into my other sob stories about how documentation, even documentation that is integrated into a user interface, is treated like an afterthought in the product design process. I also won't go into how many writers and writing teams are complicit in continuing the silent bias.
I will say that the silent bias does hurt product quality in the end. Crappy documentation lowers the user's impression of the product.

November 2, 2010

Marking it Up

At my last gig I spent most of my time writing in FrameMaker and Word. It was terrible. These are supposedly the alpha dogs of the computerized word manipulation tools, but they really point out the drawbacks of all the fancy WYSIWYG writing technology that has been developed. The text on the screen looks great, but the interface required to make the text look great gets in the way of writing. All of the buttons and fancy fonts and margin controls and table tools are distractions. The crashes and weird behaviors are annoyances.
When I joined FuseSource I returned to a shop that eschews all the fancy tools in favor of writing in a mark-up language. Most of the documentation is written in DocBook XML. I'm not going to lie and say that DocBook is easy to learn or that it doesn't have its quirks. You do need to learn which tags to use and when to use them. Some of the tags don't make sense or are overly complicated.
What I will say is that once you learn some of the tags, they fade into the background and you are left with just the writing. Your hands can stay on the keyboard because you're not off looking for the bold button or using some key combo. You don't worry about how things will look because it is not in your face. You worry about what the words say when you are writing, and wait until you publish to worry about what it looks like.
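A few lines of DocBook show what this looks like in practice. The tags say what each word is, and nothing about fonts or layout; the snippet itself is illustrative, with made-up product details.

```xml
<para>Click <guibutton>Apply</guibutton>, then restart the broker
with the <command>activemq</command> command. The broker reads its
settings from <filename>conf/activemq.xml</filename>.</para>
```

Whether a <guibutton> renders as bold, or a <filename> renders in monospace, is decided once, in the stylesheets, and never again while writing.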
I've also been doing some work in other mark-up languages like Confluence wiki text, Markdown, and Jade. Like DocBook they have their quirks. Unlike DocBook, they are true presentation mark-up. The marks do not denote what a word represents; they denote how it will look. This makes the writing process a little different, but using mark-up still makes creating the content the center of the process. You still aren't confronted with a constant representation of the page. You still don't have to twist your hands into awkward twister poses to underline something.
When you are done creating the content, you can think about how it will be laid out in a medium. You can customize things for professional printing, for PDFs that will be printed out in an office, for a desktop Web browser, and for an iPhone. The layout process has its own time and space. You can focus on it and not on the words.
Combining the writing process and the layout process is not more efficient. The human mind is lousy at multitasking.

October 15, 2010

Integrating Community Content with Commercial Content

One of the challenges facing a doc team working on commercial open source products is navigating the space between community sourced content and commercially sourced content. Does it make sense to use community sourced content? How much content does the team push back into the community? What policies need to be in place to facilitate the transfer of content across the boundary?
To maximize efficiency, it makes sense to incorporate community sourced documentation into the commercial documentation. Leveraging the community multiplies the number of writers without increasing costs. The community also, in many cases, is where the most knowledgeable people (users and developers) live.
Using the community content creates a number of dilemmas:
The first is the legal ramifications of using the content. Can the content be reused? What citations and notices need to be included? Does using community content mean that all of the commercially generated content becomes owned by the community?
Once the legal issues are resolved, the next dilemma is a product question. If the bulk of the content is community sourced, what is the value being added by the doc team? Is it just repackaging to align with specific versions and ease accessibility? Does the doc team edit the content? Or is there some percentage of content that is added exclusively to the commercial documentation?
The product team also needs to determine how much of the work done by the doc team is kept internal. For the code, the answer is that most of it is pushed back into the community. Only very targeted features are kept back as a value add. For documentation, the question is more nuanced. Documentation is almost exclusively a value-add proposition for a commercial open source offering, so figuring out how much to dilute that value is difficult. If content is being taken from the community, the doc team has a moral obligation to return something back. At the very least, the doc team should provide editing support to the community. Beyond that, however, what is the right amount of back flow?
Once the product team has decided the strategic approach, the technical dilemmas rear their head:
Is the community content in a format that is easily consumable by the doc team? Many open source products use wikis of one flavor or another for their documentation. While wikis are easy to edit and provide some nice community features, they are not great for commercial documentation. They have versioning problems, limited formatting capabilities, limited workflow control, and a number of other deficiencies. Commercial doc teams typically work in either a dinosaur product, like FrameMaker or Word, or an XML format, like DocBook or DITA. Some wikis have tools for exporting to XML formats with varying levels of success. Some open source projects are willing to switch to XML. In either case, there are hurdles that need to be overcome if content is to be shared.
Many open source projects are not great about versioning documentation. They end up with a single set of documentation with a mishmash of content and a lot of "in version x, but not in version y" qualifiers. Commercial documentation cannot function that way. How do you ensure some level of version sanity when importing the community content?
Community content is either very stale or in a constant state of flux. Stale content is easy to merge, but constantly changing content poses a problem. Is there a single person responsible for handling merges? Is there a merge schedule? What about outbound merges?
While many communities generate good quality content, it is often in need of editing and vetting. How is that handled? Are the edits made in the community version and imported? Are they made internally and exported on a case by case basis? How is the community content vetted? Does it need to be reviewed by internal engineers? Or can it be assumed that the community's self-policing ensures that the content is technically accurate?
FuseSource has taken a firewall approach to solving the problem. The community content is used as an information source, but not directly copied. When content is contributed back into the community, it is added to the project's wiki alongside the other content. We do provide some editing support to the community sites. There have also been cases where the product team decided that a piece of content made more sense in the community, so it was simply contributed.
Initially, we chose this approach for technical reasons. We didn't have a clean way to get content out of a Confluence wiki and into DocBook. Fintan Bolton solved that problem with his Confdoc plug-in, but we have continued the same firewall approach. Now it is for simplicity's sake. Building an import/export system and a set of policies about moving content back and forth across the divide seems to be of dubious value in many cases.
Much of the community sourced content is excellent for highly technical users who are comfortable off-roading. It needs some serious work to be made appropriate for the average corporate developer. In many ways, it would be inappropriate to dumb down the content in the community. Solving the versioning issues is tricky. Is it worth the effort if the community does not seem to care?
We do directly import some reference content. The import is one way. We make edits in the community and then suck the content into our repository. It works because the amount of material sucked in is massive and easy to edit. There is, however, a decent amount of post-processing that needs to be done after the content is inside the wall.
Neither method is particularly efficient. I'd love to hear how other groups solve this problem.

October 8, 2010

Commercial Open Source Documentation

I'm back working on the Fuse products again and couldn't be happier. The fact that they are commercial offerings of open source projects makes working on them more interesting than working on commercially developed software. It is not that the products themselves are necessarily more interesting (although in this case they are); it is the challenges around documenting them that are more interesting.
In a purely commercial world, the whole process is controlled. The engineers are located within the boundaries of the company. They answer to managers that you can ping. The feature sets and release cycle are well defined and mostly static. The documentation requirements are usually spelled out by the product manager with some input from the writers. They are usually well understood early in the cycle. When the product ships, the documentation is frozen until the next release is planned.
In a commercial open source world, things are different. While some of the engineers work for the company, most of them are part of a larger community beyond the corporate wall. Feature sets and release cycles are planned, but the plans usually change due to unpredictable input from the community. Documentation requirements tend to be fluid to match the product development process. Customers have a large influence on setting requirements for documentation. There is an expectation that improvements will roll out over the course of a product's life cycle.
In addition there is the ongoing struggle between what to take from the community, what to offer back to the community, and what to keep as part of the commercial value add. Do you offer cleaned up versions of the community written documentation? Do you push content written internally back to the community? If so, what? What is the process for sharing content between the community and the internal documentation team?
Coping requires fluidity and focus. Being capable of changing when needed is crucial, but so is staying focused on the core value of what is being delivered to the customers. If a change doesn't make sense, you need to be able to see that.
The other thing that is crucial is a dedication to quality. It is far too easy to let quality slip in an effort to meet all of the demands. When you let quality slip, you let your value slip. The community can write documentation of questionable quality without paying for a writer or an offshore writer can be hired to do some rudimentary editing. Neither outcome is good for you or the customers. In the commercial open source market, customers do read the documentation.


What's more important: technical or writer?

Lately I have seen, and heard, a number of discussions of what skills are most important in a technical writer. The conclusion reached in most of these discussions saddens me. It seems that conventional wisdom is that technical skills are considered primary. One recruiter told me that all of the positions she has open are for programmer/writers with an emphasis on programming.
I can see why businesses would want technical writers that are highly technical. It lightens the burden on the engineers because the writers don't ask as many questions. It also means that the writers can do more than just write documentation. They can do QA or possibly code. The business doesn't have to waste as much money on documentation.
Ideally, businesses, and engineers, would like to see technical writers cease to exist. They cost money, ask too many questions, delay delivery dates, and whine about usability issues. The only value they serve is to create a bunch of content that customers demand, but never read.
What I cannot understand is why technical writers believe that technical skills are primary. The "technical" in the title is an adjective describing "writer." The value of a technical writer is that they can take jargon laden technical information from engineers and turn it into something readable by the uninitiated. They can write a process in a way that makes it clear. They can distill complex technical topics into chunks that a user can digest. Writing is the primary skill.
I'm not arguing that technical skills are unimportant. My background in software engineering has been invaluable to me. However, it is my writing skills that make me good at my job. I've worked with several technical writers with excellent technical skills who were terrible technical writers. Sadly, the poor quality of their content was usually overlooked because they fit in with the engineers.
Writing first; technical second.

August 27, 2010

Getting Lost in Features

I have two pet peeves when it comes to "features". One is that we get very caught up in documenting a product's features and not how to use the product. The other, and the one this post is about, is that we tend to crave more features even when they don't make the product any better.
Yesterday I came across two articles. The first article was about a CS professor who uses old BBC micros to teach his students how to program. The micro strips away all of the "features" of modern IDEs and forces the students to think about the code. The second article was a meditation on the possibility that documentation efforts have become so focused on shiny presentation features and rich editing features that quality content has been overlooked.
The "feature" overload problem makes me remember my early days of MP3 players with horror. I wanted no part of the iPod. It only supported a few formats, didn't have a radio tuner, didn't record, couldn't edit song titles, didn't have a way to make playlists on the fly... Instead I raced out and bought a fancy Rio that supported every format known to man, recorded, had a radio tuner, and even had network syncing. The thing rocked, but it was a bitch to use. Worse, the radio sucked and I never recorded anything. The only "feature" that was useful was the network syncing, because I didn't have a place near the computer to put the syncing cradle. As more of my friends got iPods and I got to use them, I became very jealous. The iPod looked good and was easy to use. Fewer features, but a better solution to the problem. When the Rio died, I replaced it with an iPod and never looked back.
I look at some of the Webhelp systems in the world and ask myself how much all that Javascript goodness really adds to the usefulness of the documentation. Does a collapsible TOC really make it easier to find things? Does putting the index on a separate tab make it easier to use? Do the half-assed search capabilities built into the system make it faster to find information? What about the buttons that collapse the TOC pane or sync the display to the TOC? They all look cool, but would spending more time on good content and good organization be more valuable? My answer is that the "features" of most documentation UIs are not that helpful and that better content is usually a good answer. Personally, I find using search painful and would prefer a halfway decent index any day.

I have the same problem with a lot of the documentation tools that are on the market. FrameMaker, RoboHelp, ePublisher Pro, Flare, Word, etc. are all powerful, feature rich tools that are intended to make documentation production easier. They all, in their own ways, take the writer away from the content. WYSIWYG editors place too much emphasis on page layout and distract from the words on the page. Coping with all of their features, and the odd missing feature, takes a learning curve, and often you are forced to make compromises to fit the tool.
Even some of the XML editors for documentation can go overboard. They all do auto-complete to various degrees and have WYSIWYG views. Give me a simple editor that validates my mark-up and spell checks, and I'd be happy.
Let me focus on the words and not the tool.
We need to remember what problem the product is intended to solve and make it excellent at solving that problem. Strip away all features that do not further that goal.

August 18, 2010

Documentation UIs

Another writer and I were talking at lunch, and he bemoaned the stagnation in thinking about how users interact with documentation. For the most part, the UIs for documentation haven't evolved much in the past 10 or 15 years. The two forms are PDFs and basic HTML. PDF is still PDF - a poor electronic substitute for paper, although now some PDFs can be annotated. HTML documentation runs the gamut from boring to more boring. Most of it is basic 90s HTML mark-up with some sort of JavaScript collapsible TOC and basic JavaScript driven search capabilities. You may get linear navigation aids in addition to the basic browser buttons, but often the JavaScript screws up the browser buttons. Help systems are just HTML documentation with a different framework wrapped around it. Eclipse Help looks very similar to the HTML help most groups churn out.
My fellow writer placed the blame for this squarely on Adobe. They purchased all of the proper assets to make real change (FrameMaker, PDF, Quark, Illustrator, Photoshop), but left most of it to languish. FrameMaker has not substantially changed in the 10 years I've been doing documentation. In fact, the only real change from 1995 that I can see is that FrameMaker is now a Windows only product. PDF has also stagnated.
Adobe does deserve a fair amount of blame for the stagnation, but they are not alone. All of the tool vendors in the documentation space have allowed things to just stay flat. WebWorks has improved their product's functionality for the writer, but the output is still JavaScript laden HTML. Flare is not much better. The DITA OT and the DocBook tool chains cannot claim any high ground here either. The DocBook HTML output is definitely stuck in the last century, and the DITA OT HTML isn't much better.
All of these tools allow you to improve the look of the output by messing around with the CSS files or adding your own processing logic. For example, there is no reason, beyond my skill with XSLT and my time, that I couldn't make the DITA OT generate a fully collapsible floating TOC rendered using CSS. Therein lies the rub. Most people working in technical documentation today do not have the skills or the time to do major customization work beyond what the tools provide natively. Some tools make it hard; some just don't provide any help. Even when the customization methods are easyish and well documented, there is little time.
While the tool vendors share some of the blame for the stagnation, it is the professional writers and the users of documentation who share most of the blame. We have not demanded better interfaces to work with documentation. We have accepted that it must live with the subpar UIs we've suffered with for years. Don't we really just want print or HTML served up Google style? Or is it that we have been conned into believing that is the best we can get?
Lately the world of electronic books has been showing us that we can expect more. iBooks, the Kindle apps, and a slew of other eReading platforms have recently appeared on the market that have shown that reading electronic content does not need to be boring or painful. I just read a beautiful version of Alice in Wonderland on my iPad. The typography was excellent, the page layout was top notch, the interface was intuitive, I could look up words in the dictionary without too much disruption. It was a dream. I've seen comic books on the iPad as well and they too work very well. In most cases they are just trying to replicate the feel of a printed publication, but some of the elements go beyond that. New eBook platforms allow for embedded audio and visual content as well as text.
When it comes to help systems, a different UI paradigm is needed from the book paradigm. The current bland HTML page with a generic search feature cannot be the best we can do. One thing we can do is break the TOC model. It provides a false sense of "bookishness" to a help system that is essentially non-linear. The search features could use some bulking up as well. Maybe add features that allow users to save searches. How about letting users build their own TOC based on their use patterns, or save breadcrumb trails? More connection between the help system and the UI it documents would be good as well. Some help systems already have the ability to open dialogs and trigger actions in the associated products, but typically this feature is hard for a content developer to trigger.
Don't we owe it to our users to demand more and better usage paradigms? Ones that make the task of finding and consuming information easier and that are more than just functional?

August 16, 2010

A Tale of Two Doc Types

As I see it, modern technical documentation comes in two basic formats: books and help systems. The key differentiator is the reader's expectation about how the documentation is used and not the actual production medium.
Books are intended to be used in a linear, narrative fashion. A user expects to start at one point (a chapter head or section head) and then move forward through the pages to gain more detailed information about the task identified on the start page. A chapter starts off with an overview of ESB processes and then proceeds to provide more granular information about them. Location has meaning and the content flows together to build a complete picture.
Help systems are intended to be used contextually, like a web. A user expects to be dropped into the system at a topic that is relevant to the task at hand and then traverse links to any related information. The ideas of chapter and section are meaningless. Location is valueless, and each page is an island with bridges leading to other islands that may provide further information.
In PDF, and print, the difference between a book and a help system is hard to define except that the help system will almost always feel incomplete and disjointed. PDF, like print, is meant for linear content since it is harder to jump around and the viewers all prominently display location.
In HTML, things are more flexible. HTML books can feel linear and help systems can feel web-like.
HTML books should prominently display their TOC and provide some indication of position in the TOC. They should also provide navigation buttons for traversing the content linearly. Clues like the word chapter and section also help orient the user.
HTML help systems, while most have a TOC system, can dispense with a majority of the location trappings since location has little value. Help systems should not have linear navigation tools, since linear navigation does not always translate into the expected behavior for the user. Words like chapter and section should be expunged since they confuse the user by implying organizational constraints that are not valid in the help system.
Things get confusing for help systems because they often have a TOC panel that displays a linear organization of pages that appear to be grouped into nested structures like chapters and sections. Look at an Eclipse help system's TOC and it is easy to think that it can be used like a book. Start using it that way and you will soon become frustrated because, even if there were a way to navigate linearly, the actual content is not linear in the same sense as a book. A high-level topic may describe a big process for which the nested topics provide detailed sub-processes and context, but it is rarely the case that the content flows like a book. It is intended so that a user can quickly land on a single page, perhaps by triggering F1 from the UI, and get the information they need. They can choose to explore the provided links to other topics or not. In a book, they would likely need to click through to the pages that follow for more information.
A book is a longer, more descriptive, leisurely way to learn about something. It is intended as a way to deliver content at a deep level in a pedagogical manner. A help system is a collection of quick cards that deliver targeted information so a user can get on with the task at hand. The idea that a writer can create a document that fits both purposes is foolish. The result, like all attempts at making hybrids, will be something that does neither job well.

August 13, 2010

Documentation in an Agile World

Most agile frameworks are programmer focused and don't talk about the documentation part of product development. It's not that agile frameworks don't consider documentation an important part of product development. It's that documentation is a black art to programmers and is often hard to quantify (and we writers like it that way: it allows us to resist change and maintains our sense of creativity).
For example take the following story:
As a developer implementing an algorithmic trading application, I need to put price constraints in place to ensure that the algorithm operates within safe boundaries.
The acceptance criteria are pretty clear: "it is done when the developer can add price constraints to his trading algorithm". The functional part of this is easy to measure, and understanding what to build is also clear. But what about the documentation part of the story? What is the acceptance criteria? "A procedure that explains how to implement a price constraint." That is not easily verifiable. I could write a procedure that is simple and just walks the developer through the steps: 1) add this line of code; 2) include these two parameters in the argument list; etc. Does that really satisfy the requirement such that the developer knows how to implement the price constraint? It is ultimately going to involve some subjectivity.
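To make the "walk the developer through the steps" style of procedure concrete, here is a minimal sketch of what such a documented procedure might cover. Everything in it is invented for illustration: the `PriceConstraint` class, its `floor`/`ceiling` parameters, and the `TradingAlgorithm` API are hypothetical, not part of any real trading product.

```python
# Hypothetical sketch only: PriceConstraint, TradingAlgorithm, and their
# parameters are invented to illustrate the documented procedure; they
# do not correspond to a real API.

class PriceConstraint:
    """Rejects orders whose price falls outside a safe band."""

    def __init__(self, floor: float, ceiling: float):
        # Step 2 of the procedure: "include these two parameters
        # in the argument list."
        self.floor = floor
        self.ceiling = ceiling

    def allows(self, price: float) -> bool:
        return self.floor <= price <= self.ceiling


class TradingAlgorithm:
    def __init__(self, constraint: PriceConstraint):
        # Step 1 of the procedure: "add this line of code" --
        # attach the constraint when the algorithm is constructed.
        self.constraint = constraint

    def place_order(self, price: float) -> str:
        # The algorithm now operates within safe boundaries.
        if not self.constraint.allows(price):
            return "rejected"
        return "placed"


algo = TradingAlgorithm(PriceConstraint(floor=95.0, ceiling=105.0))
print(algo.place_order(100.0))  # within the band: placed
print(algo.place_order(120.0))  # outside the band: rejected
```

A bare step list like this is easy to verify mechanically, which is exactly the point of the argument above: it can pass a literal reading of the acceptance criteria while still leaving the developer without any real understanding of why or when to use a price constraint.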
The second question is whether the story, with the documentation included, could fit into a single sprint. The documentation cannot really be written until after the implementation is underway and cannot be QAd until after the implementation is complete. The developer will also need to spend some time with the writer providing information. So, should any story with documentation impact be split into two stories: one for engineering and one for documentation (the documentation story being dependent on the engineering story)? IMHO splitting stories into a programming story and a documentation story is the best way to do it, but it leaves open the possibility that the documentation story will get prioritized out of the release.

Another question I've heard about including documentation in agile frameworks involves having skin in the game. The concern is that writers, because they cannot write code, may have sprints where they have no active tasks (no skin in the game). In traditional development models, the programmers get started building stuff right off the bat and the writers settle into what they call "discovery mode": researching what the programmers are building. At some later point in the release cycle, after the programmers have built a bunch of stuff, the writers start writing. So, it makes sense that there is a concern that in early sprints, writers will have no skin in the game. However, I think that this is not going to be a concern in actuality. The writers always have user requests that filter in from previous releases that can be completed in early sprints. They also have areas where they know the documentation can be improved. Given that functionality will start appearing after the first sprint, the writers should have something new to document by sprint number two.

One more question. This one combines concerns from both the previous questions: how do you create acceptance criteria for a story like "understanding the user roles for Apama" or "research the Sonic WS-Security implementation"? My opinion is that you don't, because they are not stories; they are tasks that need to be completed as part of resolving a story. The task may require an entire sprint, but that does not change the fact that it is just a task. I also don't think that the task will take an entire sprint, because such tasks will be constrained more tightly than "research WS-Security", to something more along the lines of "research how to secure a Web service using Kerberos tokens." It is possible that some research tasks may take more than a sprint, but programmers may also need to spend a sprint doing research on ways to implement some functionality. The framework has space for it.