December 22, 2011

How to Build a House

We get a lot of feedback saying that customers want cookbooks. They want documentation that tells them how to solve a problem. They are not complaining that the documentation doesn't cover all of the features they need to use. They are complaining because they don't know which features to use and how to use them to solve the problem.
How do we address this feedback? Write cookbooks? For a typically resource-starved doc team that would mean sacrificing feature documentation, and product managers generally frown on that. Besides, cookbooks tend to have too narrow a focus for a general audience. Ignore the feedback and hold on to the belief that it is only a vocal few who aren't satisfied with the feature documentation? That won't stop the complaints. It also doesn't address the problem of users not knowing what to use when, and it doesn't help users learn how to use the product in a real sense.
You could instead reframe how you approach the task of documenting the product. Instead of starting from "how do I use feature x?" we can start from "how do I solve problem y with feature x?". I think you get documentation that satisfies both the users who want to know how to solve problems and the product managers who want the features documented.
This approach is not easy. The difficulties start with the list of things we are given to write about. Typically the product manager hands out a list of features with little explanation of what problem they address for the users. I've seen lists that say things like "support WS-Security" with no statement of why that is useful. The writer needs to drag the use case for the feature out of the product manager, if the product manager even knows it. The writer also has to know enough about the product domain to have a general sense of what problems users are trying to solve. Once we figure out what people want or need to do with our products, documenting the features as part of building a solution should provide comprehensive coverage.
There will still be topics that describe how to use an individual feature and there will still be reference material. It will just be organized around the idea of building a solution instead of using a toolkit. One question I was asked was about "less popular" features. My answer was that if a feature is used to solve a customer's problem it will get covered. If it doesn't solve a problem, then maybe product management should consider yanking it.

December 15, 2011

Bit Literacy by Mark Hurst

I downloaded this book from Apple because it sounded interesting and it was free. It is not the sort of book I would pay money to read. The description made it clear that this was a book with something to sell.
Bit literacy is the product and Hurst's company will gladly sell it to your company for a hefty fee. I don't begrudge Hurst for this or think any less of him, but I don't ever feel like paying for a marketing tool.
It turns out that the book has some good stuff in it. It also doesn't shy away from offering up details on using the system. So, if you are buried under bits (e-mail, photos, PowerPoint decks, etc.) it may be worth paying a few bucks for.
The key to the whole system is rooted in standard productivity lore. Let the unimportant stuff vanish so you can focus on getting things done. Don't keep ten pictures of the same thing; only keep the best one. Don't save every scrap of e-mail because it buries the stuff you need to save.
Hurst is a proponent of inbox zero. You should empty your inbox at least once a day. E-mails are either junk to be discarded, to-do items that need to be tracked, or information to be stored in an appropriate place. The inbox is not a place to keep to-do items or information.
One other discussion I found interesting was the discussion of file formats for textual data. Hurst comes right out and says that Word, and its ilk, are never the proper choice for sharing text. He prefers plain text unless you require formatting. If formatting is required he prefers PDF.
Bit Literacy has some interesting ideas for writers as well. Hurst has a whole section on how to write in a bit literate manner. Basically it is all about front-loading the point and keeping things brief. Write in a way that respects that the reader is busy. This is not about pleasure; it is about efficiency.
Hurst's book has some worthwhile points. There is something in there for anyone who uses a computer.

October 28, 2011

Fuse Message Broker Update

I pushed out a big update to the Fuse Message Broker library today. It includes a new product introduction that should be more informative. It also includes the first stab at documentation for administrators. Naturally, there are a collection of other little updates.
The long term goal is to reorganize the existing content to be more task oriented. That will happen as we fill in the gaps over the next year.

October 19, 2011

Google Docs Mobile

Yesterday I had to do some reasonably simple edits to a document in Google Docs, but I was away from my computer. I didn't think much of it since I had my iPad and Google Docs supposedly has a mobile interface.
Let's just say that I have significantly less hair after the experience. The mobile version of Google Docs was better before they allowed "editing". Now all you can do is add text to a document that doesn't have tables. You cannot add formatting or do any sort of work with a table, even the rudimentary stuff you can do in the desktop version. The document management features are similarly hobbled. I don't see the point unless it is to encourage people to use Android powered gadgets.
In frustration, I decided to switch to the desktop version of Google Docs figuring the iPad's screen was big enough. Sadly I kept getting scripting warnings. Sometimes I would get a few minutes before things went pear-shaped. Often, however, I could barely get the interface to load. It was crappy all around.
I don't see the point in offering a half-assed experience like that. If you are going to play the consumer-friendly, open company, then build consumer-friendly products that work well!

September 16, 2011

Tutorials

It seems like everyone wants tutorials these days. In the last few days there have been many e-mails on the CXF users list looking for good tutorials. At FuseSource we keep getting asked for tutorials.
It makes sense in a lot of ways. Loads of people learn by doing and the best way to learn by doing is with a little guidance. Also, one way that a lot of code gets started is from the samples that ship with a development framework.
The trick is to write good tutorials. What makes a tutorial good? First and foremost, it should lead the reader to a successful outcome without frustrating them too much. A tutorial whose steps are "set this environment variable and type mvn install" is too easy and doesn't teach much. On the other hand, one that makes the reader type in tons of boilerplate code is too much bother. The reader should have to do the core parts of the work to learn the important bits. When they get it right, and they should easily get it right, it should work as expected.
A good tutorial should create something useful and close to real. HelloWorld doesn't quite cut it in a lot of cases. Even if the exercise is to simply show how to instantiate and publish a Web service, it would have more impact if the Web service did something. This may mean providing some implementation code along with the tutorial. If the goal of the tutorial is to implement something, make it interesting.
A good tutorial should be short. Short does not mean it must be easy. Short means it should be focused. A tutorial needs to show one thing. It can be simple like starting an ESB container or complex like using WS-RM or securing a Web service. But it should not focus on creating a secure and reliable Web service that runs in the ESB. If that is what you want to show, make it three tutorials that flow into each other and provide code for readers who want to skip ahead.
A good tutorial should also be authoritative and use best practices. While there are always, particularly with Apache projects, multiple ways to accomplish the same goal, a tutorial should only show the one the author, or his company, believes is the best practice for accomplishing the goal. New users like to be shown one way to do things. Once they get more experience they will discover other ways to approach a problem that may fit their specific needs better. The tutorial should, however, use an approach that will work in 99% of cases even if it is not the optimal solution for some.

August 16, 2011

DocBook to WebHelp - Better HTML Books

One of the projects the DocBook community sponsored for the 2010 Google Summer of Code was DocBook WebHelp. The goal was to develop a process for generating output that resembled WebHelp from DocBook. It resulted in a process that generated HTML output that uses jQuery to create a collapsible TOC and Apache Lucene to create a full text search index. It is slick.
The FuseSource documentation had been using HTML frames to accomplish something similar since we were part of IONA. While it looked good and got the job done, it was not a great solution. Some people complained that the frames didn't work in their browsers. We got complaints that it was hard to bookmark pages or to get links to send in e-mail. There were also complaints that the documentation looked very 1999.
So, a few months ago I started looking at what it would take to modify the FuseSource HTML publication process to use the DocBook WebHelp tools. It took a little doing because we have a pretty hefty customization layer, but all in all it was pretty easy. The guys on the DocBook mailing list were very helpful.
We started rolling out the new templates with the Fuse IDE 2.0 documentation. In the coming months the template will be rolled out across all of the FuseSource documentation.
The new WebHelp based HTML output is pretty slick. We added Disqus powered comment forms as well. They are not perfect, but incremental change is the name of the game. As we get feedback from customers, we will work to make them perfect.

User Interfaces

I've been reading an excellent book about bicycles, fiddling around with different kayak paddles, getting used to Lion, and looking for a writing program for my iPad. All of which got me thinking about user interfaces. Yes, bicycles and kayaks have user interfaces.
I've always believed that the user interface is the most important part of a product to get right. If the user interface is bad, it doesn't matter how efficient, powerful, elegantly designed, or feature rich a product is. If using it is harder than it needs to be, confusing, or generally less pleasant than other alternatives, very few people will use the product.
Kayak paddles are a good example of "generally less pleasant than other alternatives". There are hundreds of different kayak paddles on the market. Every paddler I know has one, maybe two, that they will choose to use. Other paddles may do if there is no other alternative. Why is this? It is usually subtle things that get described in fuzzy terms like the feel of the paddle as it moves through the water, or the balance, or the impression of its power. The important thing for this post is that these subtle things make a huge difference to the user. I may prefer how a skinny wooden paddle feels and my friend may prefer how a scoopy carbon fiber paddle feels. So we make different trade-offs for the feel.
Shifting systems on bicycles are similar. Shimano shifters use two levers for each derailleur while SRAM uses one. I personally found the SRAM confusing, but the guy at the bike shop thought the SRAM system was better. Again, we make trade-offs because of a user interface difference.
Bicycle shifting systems can illustrate a larger piece of why user interface is so critical. When I was in college they introduced index shifting: the lever clicked into place when the drivetrain was "in gear". It made shifting a lot easier, except when things were out of tune, because you didn't have to guess about when you were in gear. When things were out of tune, shifting sucked, but it was worth the trade-off. Later they moved the shifters from the frame and integrated them into the brake levers. This was a major improvement because you didn't have to move your hands off the handlebars to shift. There are trade-offs here too, but the improvement is well worth it. So a few little changes made a gigantic difference in the interface between the bicycle and the rider. Bicyclists are now more efficient and safer.
The user interface in software is even more critical than on a bicycle or a kayak. The user interface is in many ways all there is to the software from a user's perspective.
Take operating systems, for example. Unix geeks will tell you all the super important things that operating systems do, like efficient file systems and sandboxing apps for security and networking and blah blah blah. It is not that those things are not important, it's that most users do not care. The reason a user picks Windows or OS X or Ubuntu is largely because of how the user interface feels to use. Yes, there is some consideration of "will my software run on this thing" and that may be the deciding factor, but feel is critical as well. At work I'm a Windows user and for a long time I used Windows at home as well. The reason was that I was familiar with Windows and none of the Linux desktops proved to be worth the learning curve. I didn't know enough about OS X to spend the money. That changed when I met my wife. She had a Mac and after using it for a few weeks I was hooked. Why? It had nothing to do with it being more powerful, or more secure, or any of that. It was because it felt better to use. The little details made me happy and it all seemed more intuitive.
The same is true of writing programs. I pretty much hate Word and Open Office and FrameMaker. They are bloated, ugly, and unintuitive. I actually prefer using text editors over any of them. They all suffer from a similar problem: the features get in the way of the core mission of writing. Even when the features are supposed to help the core function of writing they get in the way. Take autocorrect: it just changes the word under your fingers instead of simply underlining it.
What I'm getting at is that a UI needs to make completing the core task of the tool easy and enjoyable. Anything that gets in the way of accomplishing that core task should be eliminated. This of course means that the product's feature set must be properly assessed and the core task identified. Once the UI is stripped down to the minimum required to accomplish the goal, the details that make doing the job enjoyable need to be added. For example, nice icons, readable text, and sensible animations that provide feedback without being distracting all make interacting with a tool more enjoyable.
Nailing the UI makes the difference between being the iPod and an MP3 player.

August 14, 2011

Fuse IDE Videos

The FuseSource doc team has been working on perfecting our ability to make high quality video tutorials. I tried a free tool called Wink which did OK, but lacked the polish of professional tools.
Part of the problem was also my lack of voice talent. My voice is perfect for mime and I really have a hard time reading off of a script - even one that I wrote.
Fintan Bolton started using Camtasia about a month ago and has had much better luck. Camtasia has some excellent editing and production tools. It can do zooms and transitions. It also has some nice audio editing tools.
Fintan also has an excellent voice talent on hand. His wife has a good radio voice and she can read a script well. Her voice over is clear and easy on the ears.
He has done two excellent videos that show Fuse IDE in action:
* Message Browsing and Tracing
* Throttler EIP

June 10, 2011

sect1 vs. section

This is sort of nerdy and writerly, but this is supposedly a blog about technical writing--if there isn't a place for nerdy and writerly....
A few weeks ago there was an e-mail thread on the DocBook mailing list about the best practice for using the sect1, sect2, ..., sect5 tags. Someone suggested that nobody should use those elements and just use the infinitely nesting section element. They went as far as suggesting that the offending sectN elements be removed in the next rev of DocBook.
I agree with this sentiment. While I can see the basic desire to know if I'm writing a section or a sub-section, I rarely find it to be a need. My basic organizational style doesn't involve more than three levels of sections. It also, generally, ends up with a structure such that higher level sections are just containers for lower level sections. Therefore, it is rare that I ever find myself caring or wondering what section level I'm at. It is usually three or less, and if I'm filling a section with a lot of content it is the deepest level I'm going to hit for the topic at hand.
Even when I do find myself wishing the schema would tell me how deep I've gone or enforcing depth limits, the added flexibility of nested sections overrides the wish. The sectN elements make reusability tough. A sect3 must always be placed inside a sect2, but a section can be a sect3 or a sect2. I have seen several cases where a section was a top level in one guide and a nested section in another.
There were several people on the mailing list who did not agree with the idea of doing away with the sectN elements. One claimed that it was imperative to enforce section levels. Another just said that you can feel free to remove any elements you want, but it wouldn't be DocBook anymore. I can see that for some writers, or writing groups, it might be important to enforce section levels. However, there are other ways of enforcing style guidelines than encoding them into a schema. We do it through discipline and occasional peer editing.
I can also see the value of keeping the sectN elements in place for legacy documents. This, in my mind, is the best reason for keeping them. There is a lot of content out there that is using DocBook and likely a lot that use the sectN elements. It doesn't hurt too much to keep the elements in place while encouraging people to use the section element instead.
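To make the reuse argument concrete, here is a small sketch of the two styles in DocBook mark-up (the titles and content are invented for illustration). The sect2 fragment is locked to its level, while the recursive section fragment can be dropped into a document at any depth:

```xml
<!-- Fixed-depth style: this fragment is only valid inside a sect1 -->
<sect2>
  <title>Configuring the Broker</title>
  <sect3>
    <title>Setting the Port</title>
    <para>...</para>
  </sect3>
</sect2>

<!-- Recursive style: the same content nests at whatever depth it lands -->
<section>
  <title>Configuring the Broker</title>
  <section>
    <title>Setting the Port</title>
    <para>...</para>
  </section>
</section>
```

The second fragment is valid wherever a section is allowed, which is exactly what makes pulling it into another guide at a different depth painless.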

April 28, 2011

Agile Is NOT a Process

Rant warning.
I've heard one too many people either ask me if I work in an agile environment or tell me that their team is doing agile development. The English teacher in me goes berserk when I think of the abuse they are doing to the word agile. They have taken a simple, clear word and turned it into marketing slang for any manner of product management process that doesn't involve long range planning.
I've worked on several teams that were "agile" and been to training for another that was attempting to be "agile". Let's just say I've seen mixed results.
In fact, one of the most agile teams I worked on did formal long range project planning with feature specs and everything. We had an overall idea of what we wanted to build and worked towards it. However, we also did good beta programs and customer outreach during the development cycle. This way we could validate what we were doing against what was really needed. If we needed to change, we did. It probably helped that I worked with a very talented group of software engineers who liked modular architectures and an excellent customer-facing product manager who understood the customers as well as the developers.
The least agile team I worked on also followed the classic model. However, they were a monolithic, top-down driven team that was a slave to their process. Development cycles were months long and getting a new feature in, or responding to a change in priorities, was virtually impossible. I wasn't on this team long (it made me crazy), so I have no idea what confluence of factors made it so rigid.
This same team is now "agile" thanks to adopting SCRUM. They still have long development cycles, although they are split into two week sections. Getting new features in, or responding to changing demands, is still virtually impossible. The things that have changed are that fewer people actually know what the feature set will look like, the writers are more in the dark than usual, and the release date is so fuzzy that it may simply arrive without warning.
I worked on an XP "agile" team as well. I'd put their agility in the moderate range, but they were that agile before XP. What XP did bring was more frequent meetings, less shared information, and a steep decline in release quality. We still marched for a set date and a predefined feature set, but product management reserved the right to change the feature set at will without changing the release date. We also did away with any meaningful sense of resource planning. More meetings and less quality is the kind of outcome I look for from a hot process.
One friend was telling me that his work estimates for a big project were rejected because they were done in hours. Since they are SCRUM "agile", estimates cannot be anything as tangible as hours. So, he went back to his desk, removed the word hours from the estimates, and waited a day before resubmitting the estimates. Naturally, they were perfectly OK. One of the funniest parts of this to me was that he was asked to do planning four months out.
At FuseSource we are pretty agile, without using any formal process. We keep in touch with customers, design in a way that makes responding to change easier, keep open lines of communication between all functional groups, and get the job done. Things can get a little chaotic, but that is pretty common in software development. The key point is that we make solid quality products and can respond to customer needs quickly.
So my point is that being agile isn't about doing "agile". Most of what people mean when they say they are on a team that is doing agile development is that they are following one of the "agile" product management processes that their management was sold. If the team isn't agile, no amount of mystical "agile" religion will make it agile. All doing "agile" will do is replace one set of rigidity with another. On the flip-side, an agile team will be agile regardless of the process imposed on it. There are definitely processes that will be less efficient than others, but a truly agile team will either stop using them or find a way to make them work.

April 27, 2011

Video Tutorials

Video is all the rage these days, but I have been trying to avoid making them. It's not that I don't appreciate the strength of videos for marketing and for visual learners. It is just that my medium is static words on a page, not moving pictures with audio.
It came to pass that FuseSource wants video tutorials and the writers have been assigned the task of producing them. I did the first one recently and it was an interesting challenge - I should say series of challenges.
The first challenge is figuring out what software to use to build the tutorials. There are a number of screen recording tools available, like Camtasia; you can record a WebEx; or you can go with a tool geared more towards e-learning and demo creation, like Captivate. I quickly ruled out the WebEx idea. Some consultation with co-workers who make video tutorials at Progress strongly suggested using Captivate over Camtasia. Captivate is more forgiving and more flexible.
The big problem with Captivate is the price tag.... So, I set out to find a freeware alternative if possible. Fortunately I stumbled upon Wink from DebugMode. It has most of the features of Captivate for free!
Tool in hand, I created the video portion of the tutorial. Wink lets you record as a stream or based on mouse/keyboard clicks. I opted for the mouse/keyboard clicks method because that was what I was told worked best. So, I ran through the demo I was using for the tutorial and captured everything. This was a little nerve-wracking because you want it to go smoothly. This is where doing it based on mouse/keyboard clicks comes in handy. If you record the demo as a stream, you have to restart every time you make a mistake. Using the mouse/keyboard method saves the session as a collection of individual frames so you can remove mistakes later.
The resulting video capture was pretty good overall. A few places were choppy and in a few places the cursor jumped around a bit.
Wink lets you do a bunch of editing of the individual slides, so I could fix most of the choppy bits. I could also edit out any mistakes. It also allows you to add text boxes, images, and links onto the frames. This is one place where the price of the software is evident: there are not a lot of choices for button styles or text box controls.
Laying down the audio was tedious, but not because the tool makes it hard. In fact Wink makes it pretty simple. Doing audio is tedious for several reasons. The first is that I hate listening to my own voice for an entire day. Second, I'm not a trained voice talent, so I am not graceful at reading prepared texts. There are stops, stutters, strange tonal changes, pauses. I had to redo several portions of the audio multiple times to get it acceptable.
So, the first one is done. I've learned that doing a video takes a lot of prep work. You need to plan out what you intend to do and make sure that it is a) not too long, b) visually interesting (watching a Maven build scroll by does not make good video), and c) going to work consistently. I've also learned that it takes a long time to make a short video. This first one took the better part of a day and it is only a few minutes long. I'm pretty sure I'll get better, but not so sure I can get faster.

February 23, 2011

Context First

I just listened to a talk by Brian O'Leary called "Context first: A unified theory of publishing". In the talk O'Leary posits that the thing killing the publishing industry is something he calls the container model. Publishers, and authors, think of content in terms of the container it is intended to fill and in doing so leave the content's metadata, its context, on the table. A newspaper company, and its writers, think of the content they generate as articles that live in a single edition of the newspaper. All of the context that links an article to other articles in time and space is lost. When the article goes on-line, there is an attempt to recreate the context, but it is never going to recreate the full context. The paradigm needs to shift so that context is a primary consideration when creating content. Modern customers live in a world of content abundance and thus do not value content as much as they value services that make content discovery easy.
What does this have to do with technical writing? A lot. A large chunk of what technical writers do is make information accessible and discoverable. If we primarily think in terms of books, articles, help systems, topics, etc. then we run the risk of forgetting how each chunk of information fits into the whole and making that clear. We also forget to add the metadata needed to make the content easily discoverable. It is the indexing argument for the digital age. Authors put the indexing off until the end and usually end up with less than ideal indexes or none at all. Now we skip the indexes because everyone uses search to discover content, but we don't add any of the metadata to make the content search better. We leave it up to full text search to pluck words off the page or title searches.
Thinking about content as part of a whole and adding metadata to improve content discovery are key parts of a modern digital technical library. It is also value that requires specific skills to create. Indexing is hard and so is tagging.

February 9, 2011

Someday ...

Interesting thoughts on building software that can be applied to documentation as well: AlBlue’s Blog: Someday ...: "Someday, all software will be built this way. I've been a fan of Git for a while now; I've written a few Git posts in the past including the..."
In the case of documentation, the source would be XML of some ilk and the build process fully automated.
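As a rough sketch of what "fully automated" might look like for DocBook source, here is a hypothetical Makefile. The file names, the stylesheet paths, and the directory layout are all invented for illustration; it assumes xsltproc, Apache FOP, and a copy of the DocBook XSL stylesheets are available:

```makefile
# Hypothetical one-command documentation build.
# guide.xml, docbook-xsl/, and the output names are placeholders.

XSL_HTML = docbook-xsl/html/chunk.xsl
XSL_FO   = docbook-xsl/fo/docbook.xsl

all: html pdf

# Chunked HTML output
html: guide.xml
	xsltproc --output html/ $(XSL_HTML) guide.xml

# PDF via XSL-FO
pdf: guide.xml
	xsltproc --output guide.fo $(XSL_FO) guide.xml
	fop -fo guide.fo -pdf guide.pdf
```

The point is less the specific tools than that every output format falls out of the same source with no hand work, the way a software build does.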

January 28, 2011

Mark-up Smackdown

In the tradition of everything old being new again, mark-up languages are making a serious comeback for professional technical documentation. In the dark ages troff and other *roff variants ruled the roost. As more writers moved into technical writing and computer graphics got more powerful, WYSIWYG tools like FrameMaker and Word rose to prominence. Now the pendulum is swinging back to mark-up.
Markup languages come in two basic flavors: presentation mark-up and structural mark-up. The difference is that presentation mark-up is focused on how the text is presented and structural mark-up is focused on the structure of the content. The difference is subtle but important.
Focusing on presentation, as most current WYSIWYG editors do, tends to favor a particular presentation medium (the Web, print, slides). While the presented content appears to have a structure because it has headings and lists, etc., the underlying source has no real structure. A writer is free to use lists and headings in any way they wish. This is nice for a writer, but makes content reuse more difficult.
Focusing on structure removes the preference for any one particular presentation medium, but it does mean that more work is required to transform the source content into a presentation medium. The underlying source has an enforced structure to which a writer must adhere. The enforced structure is limiting, but allows for easier reuse.
Among the popular current presentation mark-up languages there is a pretty consistent preference towards Web presentation. Along with HTML there are a number of wiki mark-ups in use including MediaWiki, Confluence, and MoinMoin. There is also Markdown and Textile. They all attempt to make it easier to craft good looking content on the Web. To a high degree they all succeed. I personally like Textile and Markdown because they allow the writer to mix in HTML code to fill in gaps left by the mark-up language. The drawback to all of these languages is that they do not replicate all of the functionality of HTML and their syntaxes can be fidgety. If you don't do it exactly right the resulting output is bad and there are no good tools to help you get it right.
In terms of appropriateness for large technical documentation projects, presentation languages have serious drawbacks that counteract the oft touted claim that they are way easier to use than the alternatives. Because they leave structure up to the writer and the base unit of content is a page, it is difficult to recombine content or enforce uniform structure across a documentation set. They don't generally provide tools for indexing content or thinking of organizing content beyond a single page. They also don't have easy translations into any other presentation medium than HTML.
Structured mark-up languages, such as DocBook and DITA, push concerns about presentation into backend processing stages. The mark-up itself deals with content structure. They have enforced concepts of what makes up a unit of content. DocBook uses structures like chapters and sections. DITA uses structures like procedures and reference topics. These units of information are easily combined into larger structures like books and libraries. The drawback for some writers is that there is no easy way of seeing what the content will look like when it is published. A lot of people find it easier to write when they can see a representation of the final product and feel like they need control over the design of the content on the page. Another drawback is that structural mark-up tends to be more complex than presentation mark-up. The learning curve is steeper, but there are a number of tools available that support content completion for DocBook and DITA.
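The difference is easy to see with the same small chunk of content written both ways (the content itself is invented). In the presentation-oriented version the heading level is baked into the source; in the structural version the structure travels with the content:

```
Markdown (presentation mark-up):

## Installing the Broker

Download the archive and unpack it:

    tar -xzf broker.tar.gz

DocBook (structural mark-up):

<section>
  <title>Installing the Broker</title>
  <para>Download the archive and unpack it:</para>
  <screen>tar -xzf broker.tar.gz</screen>
</section>
```

The "##" pins the heading to level two of whatever page it lands on, while the DocBook section gets its level, numbering, and formatting from wherever it is nested and from the backend stylesheets.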
For large scale documentation projects structured mark-up, despite its steeper learning curve, has the edge over presentation mark-up. The freely available toolchains for them provide translation into Web and print formats. They have indexing mechanisms and provide structures to support content beyond a single page or unit.
Presentation mark-up languages will continue to be good choices for content developed by small teams or developers. For big projects done by professional teams, structured mark-up is the future.

January 14, 2011

New Year's Resolutions

I recently read an article by one of those magazine shrinks that said that the important part about New Year's resolutions isn't keeping them; it's making them that matters. The process of making resolutions forces you to imagine how you would like your life to be different and imagine actions you can take to make the dream real. The more specific the resolutions the better.
Since it is that time of year, I'm going to take the article to heart and make three specific resolutions; one for work, one for family, and one for me.
For work I resolve to work as part of a team that accepts nothing short of excellence. Far too often we settle for doing the minimum because of resource constraints, or we accept crappy user interfaces because the developers know best. This year I resolve that I will strive to do what is needed to provide the maximum benefit for the end user. I will not simply accept good enough. I will not sit idly by when a developer creates a bad UI or tries to slip a buggy feature into a release because it is good enough or there isn't enough time to fix it.
At home I resolve to do more around the house. I have a bad habit of putting off washing the dinner dishes until H just does them. I also tend to let laundry sit without being folded. In the warmer months I'm not great at keeping up with the yard work. This year I will be better about getting this stuff done.
For myself I resolve to take better care of myself. This includes flossing every night, doing something active at least three times a week, and eating better. I'll think twice before stopping at the McDonalds for a super size Big Mac meal. I'll actually order non-fat lattes. I'll eat more veggies. I'll actually start using the gym at work.
I want to be around for Kenzie for as long as possible. I also want to be a good role model for her. I want her to grow up seeing her dad living a healthy lifestyle, treating his partner with love and respect, and striving to be the best that he can be.
I know I'll fall short of these resolutions, but I will try to get closer to living my life according to them.