Thursday, November 11, 2010

Book Review: The Cuckoo's Egg

The Cuckoo's Egg: Tracking a Spy Through the Maze of Computer Espionage by Cliff Stoll

In 1986, Cliff Stoll's boss asked him to investigate a $0.75 accounting error for the use of their lab's computer. He quickly discovered that a hacker had penetrated their computer system and was attempting to do the same to the Milnet computers it was connected to. The situation quickly cascaded into a multi-national search for a malicious hacker-spy involving multiple "three letter" government agencies. At the center of it all was Cliff, an astronomer turned system administrator turned digital sleuth. In The Cuckoo's Egg, Cliff provides a detailed account of his adventure and eventual success.

I really enjoyed The Cuckoo's Egg. Although I was vaguely aware of the story, I only recently learned of the book. I was immediately hooked and struggled to put it down.

The Cuckoo's Egg provides a detailed description of 1980s computing, a subject for which I have an irrational fondness. It's a great reminder of how innocent a time that was and how far technology has come.

Timeless, though, were Cliff's creativity and persistence. I wish more people today put as much effort into solving problems, even apparently small ones. Cliff's story remains an inspiration.

Cliff's political retrospection added an unexpected dimension to the story. His initial mental model of government agents was comic book-esque. During the pursuit, he gradually realized that they were just normal people with similar values. Cliff's willingness to alter his political views was also encouraging.

PDF versions of the book are available online. The book was also summarized in the NOVA episode The KGB, the Computer, and Me, albeit without much of the detail that made the story interesting.

If you like computer history, cyber-security, and mystery stories then you will likely enjoy The Cuckoo's Egg.

Friday, November 5, 2010

Book Review: Fermat's Enigma

Fermat's Enigma by Simon Singh

In 1637, Pierre de Fermat, a renowned amateur-but-genius mathematician, was reading Diophantus's Arithmetica and wrote the following margin note:

I have discovered a truly marvelous proof that it is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second into two like powers. This margin is too narrow to contain it.

In other words, although there are many integer solutions (X, Y, Z) to Pythagoras's Theorem,

X^2 + Y^2 = Z^2

there are no integer solutions for any higher power:

X^3 + Y^3 = Z^3
X^4 + Y^4 = Z^4
...

Unfortunately, no written record of Fermat's proof has ever been found. Worse, Fermat had a reputation for pranking his fellow mathematicians by claiming to have secretly solved impossible problems. For 350 years, it was uncertain if a proof ever existed. But that didn't stop a lot of people from trying, including some of the best mathematicians in history.

In this book, Simon Singh provides a thorough account of the many efforts to prove Fermat's Last Theorem. Particular attention is given to Andrew Wiles's successful solution in 1994, the result of a seven-year solitary effort that shocked the mathematics world.

The book is structured very well. Singh expertly interweaves the history and mathematics behind Fermat's conjecture in an easy-to-understand and engaging manner. He describes the complicated mathematics in just enough depth to let the reader appreciate Wiles's solution without getting bogged down in detail. The narrative flows evenly and holds the reader's attention well.

Wiles's story is incredible. He was first fascinated by Fermat's Last Theorem as an adolescent. After earning a PhD in mathematics, Wiles found himself uniquely positioned to pursue a proof. He made the bold decision to both work in secret and devote all of his time to developing a proof. For seven years, Wiles worked night and day in isolation until he finally succeeded. After a fatal flaw was found during peer review, he spent an additional year fixing the proof. Wiles's focus, dedication, and determination are truly inspiring.

Fermat's Enigma is a great book if you're into math, history, and solitary geniuses overcoming the odds.

Sunday, September 26, 2010

Saving weblinks to org-mode from Safari

Each day, I come across numerous web articles, blog posts, newsgroup posts, etc., that appear interesting. Often, I discover them while working on another task. To avoid distraction, I typically save their links for later review. Sometimes I drag the links to my desktop. Sometimes I bookmark them in my browser. Sometimes I send them to myself via email. Sometimes I post them to my delicious account. It's time to admit that I need a better process.

In a prior post, I mentioned that I use Emacs's org-mode to organize my notes and tasks. I recently set up org-mode's Capture capability to easily record the deluge of thoughts that come at random throughout the day. While reading the documentation, I discovered the solution to my link-saving woes: org-protocol. In particular, it can capture links to an org file directly from a web browser, as demonstrated in this screencast.

Imitating the screencast on OS X turned out to be harder than I expected, so I thought I'd post my approach.

To begin, it's worthwhile to understand how org-protocol captures work. org-protocol is based on the Emacs server, which allows applications to use Emacs for text editing. A common practice is to have shells use an already-running Emacs instance rather than starting a new one. This is accomplished by a helper program, emacsclient, that communicates with the primary Emacs instance. For org-protocol captures, emacsclient is launched with a specially formatted argument:

emacsclient org-protocol://capture://tname/http://foo.com

By advising the server-visit-files function, org-protocol detects such arguments and creates org-mode entries from them. A powerful template facility is provided to specify how the argument information is transformed into an org entry. Multiple templates are supported and selected by the argument's tname field. The remainder of the argument specifies the URL, in this case http://foo.com, and an optional note (not shown in the example).
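Getting this far assumes the Emacs server is running and org-protocol is loaded. A minimal setup sketch for ~/.emacs (assuming a recent org-mode, which bundles org-protocol):

(server-start)          ; accept emacsclient connections
(require 'org-protocol) ; install the advice that intercepts org-protocol arguments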

Templates are specified by elisp code like the following,

(setq org-capture-templates
      '(("tname" "Link" entry 
        (file+headline org-default-notes-file "Links to Read")
        "* %a\n %?\n %i")))

This template, called tname, tells org to save the entry in the default notes file under the header "Links to Read" with the URL as a sub-heading. This results in something like,

* Links to Read
** http://foo.com

See the Emacs documentation for all of the template facility's capabilities and options.

In the screencast, two "tricks" are used to have Firefox call emacsclient with the argument needed to capture the present page. The first is a bookmarklet that creates the appropriate emacsclient argument,

javascript:location.href='org-protocol://capture://tname/'+
      encodeURIComponent(location.href)+'/'+
      encodeURIComponent(document.title)+'/'+
      encodeURIComponent(window.getSelection())

The second trick takes advantage of the fact that the emacsclient argument is formatted as a URI. The topmost component of a URI is called the URI scheme. The standard scheme, http, represents HTTP hyperlinks and is handled directly by the browser. Many other URI schemes exist, and most browsers support launching separate programs, sometimes called URI handlers, to process them. In the screencast, Firefox is configured to launch emacsclient when links with the org-protocol scheme are "clicked".

Duplicating this functionality with Safari on OS X turned out to be harder than expected. In an effort to make things easy, OS X provides the Launch Services API to automate the registration of URI schemes and their handlers. Unfortunately, no manual method is provided to specify new URI schemes and handlers. This means Safari can't simply be told to launch emacsclient when org-protocol links are "clicked".

While searching for a solution, the Worg website led me to the org-mac-protocol project. Although promising, org-mac-protocol uses AppleScripts executed via a drop-down menu. Call me lazy, but I really like the bookmarklet approach. org-mac-protocol also provides far more functionality than I was interested in. I like to keep things simple.

A second look at the Launch Services documentation revealed that two mechanisms are provided to register URI schemes and handlers. Using a programmatic API, applications can, at execution time, register themselves as the handler for new URI schemes. Alternatively, applications can include the URI scheme and handler information in their application bundle property list. Of the two approaches, the property list approach looked like the best to pursue.

Property lists are used by OS X to store application and user settings. They're essentially Objective-C objects serialized to XML. Every OS X application contains a default property list in its application bundle that the Finder reads after certain events. While processing property lists, the Finder will register any URI schemes and handlers it finds with Launch Services. With this approach, all that is necessary to register a new URI scheme and handler is to edit an XML text file.

Unfortunately, modifying Emacs's application bundle plist didn't seem to be an option. I didn't see a way to specify the helper program emacsclient as the URI handler.

I knew from prior experience that AppleScripts can be packaged as application bundles. I reasoned that I could write an AppleScript to launch emacsclient, save it as an application bundle, and modify its property list to register the script as a handler for org-protocol URIs. I suspected that this had been done before, and a quick Google search led me to this stackoverflow thread.

Using AppleScript Editor and the stackoverflow thread as an example, I wrote the following script and saved it as an application bundle called EmacsClientCapture.app.

-- invoked by Launch Services when an org-protocol URL is opened
on open location this_URL
   do shell script ¬
      "/Applications/Emacs.app/Contents/MacOS/bin/emacsclient " & this_URL
end open location

Next, I edited the script's plist at the path,

EmacsClientCapture.app/Contents/Info.plist

and added the following XML elements just before the final </dict></plist> tags:

<key>CFBundleIdentifier</key>
<string>com.mycompany.AppleScript.EmacsClientCapture</string>
<key>CFBundleURLTypes</key>
<array>
  <dict>
    <key>CFBundleURLName</key>
    <string>EmacsClientCapture</string>
    <key>CFBundleURLSchemes</key>
    <array>
      <string>org-protocol</string>
    </array>
  </dict>
</array>

I then moved the application bundle to the /Applications directory. This caused the Finder to read the property list and register EmacsClientCapture with Launch Services as the handler for the org-protocol URI scheme.

I added the bookmarklet described above to Safari and voilà! I can now click on the bookmarklet and save a link to the current page in an org-mode file. No more disorganization. Of course, it's now easier to collect distractions, but that is a problem for a future post.

Tuesday, September 21, 2010

Book Review: On Writing Well

On Writing Well by William Zinsser

Clarity. Simplicity. Brevity. Humanity. Those are the four attributes of good writing that Zinsser promotes and teaches in this classic book on writing.

The book is structured into four parts. Part one, Principles, covers fundamental topics like simplicity, clutter, style, words, and usage. In part two, Methods, Zinsser teaches how to structure a coherent story. Part three, Forms, provides advice on writing interviews, travel stories, technical articles, business communications, and other types of writing. Part four, Attitudes, discusses common feelings and decisions during the writing process. Throughout the book, Zinsser uses examples to illustrate and reinforce his points.

I really liked this book and agree strongly with Zinsser's values. A lot of modern writing is long-winded, complex, and content-free. Simple and clear writing is not only more effective but also more enjoyable. I've always tried to use a simple, direct writing style. I'm eager to apply the book's advice to further improve my writing.

I was greatly encouraged by many of Zinsser's comments on the writing process itself. For example, I consider myself a slow writer. I often spend a lot of time revising my writing to get it "just right". Even simple things like emails and blog posts seem to take an excessive amount of time. I frequently feel insecure about this so I was glad to read the following,

Writing is hard work. A clear sentence is no accident. Very few sentences come out right the first time, or even the third time. Remember this in moments of despair. If you find that writing is hard, it's because it is hard.

Similarly, I was inspired by the following comment in the chapter "Enjoyment, Fear, and Confidence",

Living is the trick. Writers who write interestingly tend to be men and women who keep themselves interested. That's almost the whole point of becoming a writer. I've used writing to give myself an interesting life and a continuing education. If you write about subjects you think you would enjoy knowing about, your enjoyment will show in what you write. Learning is a tonic.

I started this blog to learn by writing about my many interests. I find that explaining a topic often leads to deeper insights. I hope that writing and blogging will help me to continue learning and leading an interesting life.

Tuesday, September 7, 2010

Brain fitness videos

The "Thoughts about thinking" post motivated me to improve my brain fitness. To that end, I found the following three talks very informative.

I really appreciate Google posting internal guest lectures to YouTube and GoogleVideo. I have watched many of the computer science talks and learned a lot. It's nice to know that talks on many other valuable topics are also available.

Sunday, August 22, 2010

Doing less than your best

I once learned a valuable lesson - never do well a task that you don't want to do again.

The situation started out innocently. A task for the project I was working on needed to get done. I disliked the task but grudgingly did it for the benefit of the team. I did my best to do the task well with the expectation that I would be rewarded with more enjoyable work. Instead, I was asked to keep doing it. I protested unsuccessfully and was soon miserable. I couldn't understand how acting selflessly and doing a good job led to such "punishment".

I discovered that when you do any job well, the beneficiaries want you to keep doing it. If they're confident you'll produce good results, they would rather encourage you than find a replacement (who may not do as well). I eventually realized that I shouldn't have done a good job in the first place.

Initially, I found this insight disturbing. As a child, I was taught to always do my best at everything. As I grew up, I tried to do my best at school, sports, hobbies, and part-time jobs. Gradually, "always doing my best" became a core attribute of my self-image. Therefore, the notion of consciously doing less than my best just seemed wrong.

Then I made a discovery - some of my role models consciously do or did less than their best to avoid undesirable work.

For example, the legendary physicist Richard Feynman discusses being "actively irresponsible" in this BBC interview. He also cautioned against administrative roles in this letter to Stephen Wolfram. Donald Knuth and Neal Stephenson are well known for being "bad" at correspondence. On a somewhat related note, Paul Graham warns in this essay against distracting thoughts and activities. More personally, some of my mentors have privately admitted to performing badly at tasks that they don't want to do.

From these examples I've formulated the following three guidelines:

  1. Say "no" to any undesirable tasks.
  2. If unavoidable, only do an adequate job.
  3. Focus all remaining time on excelling at the work you most want to do.

Following guidelines 1 and 2 will hopefully provide more time for pursuing guideline 3. The ideal result is to be so highly valued for doing work you enjoy that you won't get asked to do anything else - the opportunity cost for distracting you will be too high.

For myself, I expect a lot of time and practice will be required to put these guidelines into consistent application. After all, a self-image can be hard to change. Thankfully, I have role models to encourage me.

Monday, July 19, 2010

Thoughts about thinking

Challenging work assignments, family matters, and home maintenance projects have kept me very busy lately. Hence, the significant drop-off in blog posts over the past couple of months. I'm hoping to correct that soon.

Not surprisingly, I've been thinking a lot about leisure time - in particular its benefits for creative thinking. My reflections are guided by two lectures on the topic.

The first is a GoogleTechTalk by Dr. David M. Levy, No Time to Think. In the talk, Dr. Levy asserts that deep contemplation not only promotes creativity but also provides a sense of calmness and satisfaction. He further argues that deep thinking cannot be forced directly - instead we must make ourselves available to it by seeking out silence and sanctuary. Dr. Levy observes that this requirement for sanctuary conflicts with modern social pressures to multi-task, remain in constant communication, and solve issues through repetitive searches of existing information.

The second is a lecture, Solitude and Leadership, by William Deresiewicz at West Point in 2009. The talk is deeply insightful and worth reading in its entirety but, for this post, can be fairly summarized by the following three points:

  1. Original thinking is a core attribute of leadership.
  2. Formulating original thoughts requires long periods of concentration and distance from the thoughts of others.
  3. Quiet solitude is necessary to do both.

Like Dr. Levy, Mr. Deresiewicz laments social pressures to multi-task, remain in constant communication, and rely on existing information to solve problems. To quote,

Multitasking, in short, is not only not thinking, it impairs your ability to think. Thinking means concentrating on one thing long enough to develop an idea about it. Not learning other people’s ideas, or memorizing a body of information, however much those may sometimes be useful. Developing your own ideas. In short, thinking for yourself. You simply cannot do that in bursts of 20 seconds at a time, constantly interrupted by Facebook messages or Twitter tweets, or fiddling with your iPod, or watching something on YouTube.

and,

Here’s the other problem with Facebook and Twitter and even The New York Times. When you expose yourself to those things, especially in the constant way that people do now—older people as well as younger people—you are continuously bombarding yourself with a stream of other people’s thoughts. You are marinating yourself in the conventional wisdom. In other people’s reality: for others, not for yourself. You are creating a cacophony in which it is impossible to hear your own voice, whether it’s yourself you’re thinking about or anything else.

Thinking originally about difficult problems is an activity that I deeply enjoy and find satisfying. Like Dr. Levy and Mr. Deresiewicz, I reached similar conclusions about distracting activities years ago and have since guarded my time and attention vigorously. I've often been teased for my resistance to social media. At times I've felt self-conscious about this choice, even deficient or outdated. These talks give me renewed confidence in my choices.

The challenge, as indicated at the outset of this post, is finding the time to think. A hard task given the pace of modern society and the high-tech industry in particular.

Wednesday, June 23, 2010

Org-mode hack: tasks done last month

I'm a big fan of Emacs's org-mode. Over the past year, I've started using it for everything - tracking tasks, taking notes, and drafting all my reports, papers, and blog posts. Org-mode is the only task-tracking software that I've used for more than a week.

At work, I am required to produce a monthly status report. To automate part of the process, I figured out a way to have org-mode produce a list of the tasks completed during a specific month. Since I couldn't find a similar example through a Google search, I thought I would post my approach for the benefit of others (and as a reminder to myself!).

Below is an example org file containing completed tasks that I'll use to illustrate the approach. The tracking closed items feature has been configured to add a time-stamp when each task is transitioned to the DONE state. The header specifies a category, Foo, that org will associate with all of the tasks in the file.

#+Category: Foo

* DONE Feed the dog
   CLOSED: [2010-04-30]

* DONE Mow the lawn
   CLOSED: [2010-05-01]

* DONE Take out the trash
   CLOSED: [2010-05-20]

* DONE Pay the bills
   CLOSED: [2010-06-01]
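As an aside, the CLOSED time-stamps above come from org's logging support. If your setup doesn't record them, a one-line sketch using the standard org-log-done variable enables them:

(setq org-log-done 'time) ; add a CLOSED time-stamp when a task is marked DONE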

First, configure org-mode's agenda feature and use the C-c [ command to add the example file to the agenda files list.
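If the agenda isn't set up yet, the org manual's suggested starting point is simply to bind the agenda dispatcher to a key:

(global-set-key (kbd "C-c a") 'org-agenda) ; the agenda dispatcher used below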

At this point, a list of the tasks completed in May can be produced by issuing the agenda tag matching command, C-c a m, and giving it the following match string:

CATEGORY="Foo"+TODO="DONE"+CLOSED>="[2010-05-01]"+CLOSED<="[2010-05-31]"

This should produce the following list (slightly reformatted to fit blog width):

Headlines with TAGS match: CATEGORY="Foo"+TODO="DONE"\
+CLOSED>="[2010-05-01]"+CLOSED<="[2010-05-31]"
Press `C-u r' to search again with new search string
  Foo:        DONE Mow the lawn
  Foo:        DONE Take out the trash

Although this works, entering the search string is a cumbersome task. A better solution would avoid this step.

Agenda provides a way to define custom commands that can perform searches using pre-defined match strings. The following elisp code defines a custom command that performs the above tag search automatically.

(setq org-agenda-custom-commands
  `(("F" "Closed Last Month"
     tags ,(concat "CATEGORY=\"Foo\""
                   "+TODO=\"DONE\""
                   "+CLOSED>=\"[2010-05-01]\""
                   "+CLOSED<=\"[2010-05-31]\""))))

After eval-ing this command, typing C-c a F will produce the same list as above without having to enter the match string. This approach is indeed better but uses a hard-coded match string. An even better solution would generate the match string based on the current date.

Although the call to concat in the example above programmatically generates the match string, it does so only when the setq is evaluated. If the setq is in an initialization file (e.g. ~/.emacs), the match string will get generated based on the date Emacs was started and not the date on which the search is performed. This could produce erroneous searches when using an Emacs instance started before the turn of the month. In such cases, the setq could be manually re-evaluated to generate the correct match string, but an automatic solution would be best.

Unfortunately, org doesn't currently support providing a lambda to generate the match string at search time. For instance, this example:

(setq org-agenda-custom-commands
  `(("F" "Closed Last Month" 
     tags
     (lambda ()   
       (concat "CATEGORY=\"Foo\""
               "+TODO=\"DONE\""
               "+CLOSED>=\"[2010-05-01]\""
               "+CLOSED<=\"[2010-05-31]\"")))))

produces the error message "Wrong type argument: stringp, …". Patching org-mode to support lambdas for match strings is an option but I prefer to maintain the stock org-mode code.

Thanks to the near-infinite hackability of Emacs, it's possible to extend the stock org-mode functionality without modifying it directly. The elisp code below defines two new interactive functions that call into org-mode to perform a tag search for a specific month.

(require 'calendar)

(defun jtc-org-tasks-closed-in-month (&optional month year match-string)
  "Produce an org agenda tags view of the tasks completed in the
specified MONTH and YEAR.  MONTH is a number from 1 to 12; YEAR is
a four-digit number.  Both default to the current month when not
provided.  Additional search criteria can be supplied via the
optional MATCH-STRING argument."
  (interactive)
  (let* ((today (calendar-current-date))
         (for-month (or month (calendar-extract-month today)))
         (for-year  (or year  (calendar-extract-year today))))
    (org-tags-view nil 
          (concat
           match-string
           (format "+CLOSED>=\"[%d-%02d-01]\"" 
                   for-year for-month)
           (format "+CLOSED<=\"[%d-%02d-%02d]\"" 
                   for-year for-month 
                   (calendar-last-day-of-month for-month for-year))))))

(defun jtc-foo-tasks-last-month ()
  "Produces an org agenda tags view list of all the tasks completed
last month with the Category Foo."
  (interactive)
  (let* ((today (calendar-current-date))
         (for-month (calendar-extract-month today))
         (for-year  (calendar-extract-year today)))
       (calendar-increment-month for-month for-year -1)
       (jtc-org-tasks-closed-in-month 
        for-month for-year "CATEGORY=\"Foo\"+TODO=\"DONE\"")))

The first function, jtc-org-tasks-closed-in-month, generates an appropriate query string and calls the internal org-mode agenda function org-tags-view. The function defaults to the current month but takes optional arguments for the desired month and year. The function also takes a match-string argument that can be used to provide additional match criteria.

The second function, jtc-foo-tasks-last-month, calculates the prior month and calls jtc-org-tasks-closed-in-month with an additional match string to limit the list to DONE tasks from the category Foo. Executing jtc-foo-tasks-last-month interactively automatically produces a list of the tasks closed in the prior month. For my purposes, this is close enough to the ideal solution. Using the optional match-string argument, I can re-use this solution to search for tasks completed in other categories or with specific tags.
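As a usage sketch, the first function can also be called directly from elisp. For example, this call reproduces the May 2010 search from earlier without waiting for June:

(jtc-org-tasks-closed-in-month 5 2010 "CATEGORY=\"Foo\"+TODO=\"DONE\"")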

My typical workflow is to archive the closed tasks after my status report is written. Org-mode's agenda makes this an easy task. First, I mark all of the tasks for a bulk operation by typing m on each. Then I perform a bulk archive by typing B $. This moves the closed tasks to an archive file, typically a file of the same name with an added _archive suffix.

Org-mode is a great productivity tool. Combined with Emacs's hackability, it's possible to create tools optimized for your particular workflow.

Addendum

I found that searching on CLOSED date ranges didn't work in org-mode version 6.34a. The problem appears to be fixed in the 6.36c release so be sure to have the right version if you want to replicate this method.

Tuesday, May 25, 2010

Book Review: The Quants

The Quants by Scott Patterson

In The Quants, Patterson provides an intriguing account of Wall Street's most successful quantitative analysts (aka quants) and the role they played in the subprime crisis.

The first few chapters introduce the main players and provide a brief introduction to quantitative finance. Patterson begins by describing how Ed Thorp applied his mathematics background and experience pioneering blackjack card-counting techniques to invent various hedging techniques and start the first arbitrage hedge fund. From there, Patterson lightly introduces other important concepts like Brownian Motion, Random Walk Theory, the Efficient Market Hypothesis, and statistical arbitrage. The introduction winds down with the October 1987 market crash, which is used as the context to introduce the "fat tail" contrary point of view personified by Benoit Mandelbrot and Nassim Nicholas Taleb - a not-so-subtle bit of foreshadowing.

The "middle" of the book discusses the background, career, and substantial success of primarily five high-profile quants: Pete Muller, Ken Griffin, Cliff Asness, Boaz Weinstein, and Jim Simons. Other Wall Street personalities are also mentioned but to a lesser extent. This part of the book more or less establishes that the above quants are very smart and very rich.

The last part of the book provides a blow-by-blow account of the sub-prime crisis. All of the quants appeared to be caught off guard, perplexed by the market's "irrational" behavior, and unsure of how to adjust their models to prevent further losses. Throughout the ordeal, many of the quants are forced to question the very foundations of their mathematical models and prior success - was it all just luck?

Overall, I thought this book was OK but felt it tried to cover too much ground as a quantitative finance primer, homage to quants, historical account of the sub-prime crisis, and financial mystery-thriller. Since my interest lies more in the technical details, I was disappointed with those portions of the book and uninterested in the dramatized historical account. Perhaps I simply had the wrong expectations of the book.

It's also possible that my expectations were set artificially high by Poundstone's excellent book Fortune's Formula which provides a detailed historical and technical account of the events that gave rise to the quantitative finance industry. If you're interested in this topic, then I highly recommend Fortune's Formula.

While reviewing the book to write this review, one passage caught my eye on page 250 regarding a study performed by MIT Professor Andrew Lo and his student Amir Khandani:

There was also the worry about what happened if high-frequency quant funds, which had become a central cog of the market, helping transfer risk at lightning speeds, were forced to shut down by extreme volatility. "Hedge funds can decide to withdraw liquidity at a moment's notice," they wrote, "and while this may be benign if it occurs rarely and randomly, a coordinated withdrawal of liquidity among an entire sector of hedge funds could have disastrous consequences for the viability of the financial system if it occurs at the wrong time and in the wrong sector."

There is some evidence indicating that the withdrawal of high-frequency liquidity was a contributing factor to the May 6, 2010 flash crash. I doubt the story of the quants is over just yet.

Wednesday, April 14, 2010

Book Review: Daemon & FreedomTM

Daemon and FreedomTM by Daniel Suarez

I haven't enjoyed a science-fiction techno-thriller this much since reading Neal Stephenson's Cryptonomicon. I liked Daemon so much that I finished the sequel, FreedomTM, before I got the chance to write a review (or do anything else for that matter).

I tried a couple of times to summarize the basic plot without revealing too much but failed. So I think I'll just say that if you're into computers, AI, hacking, MMORPGs, augmented reality, sustainable technologies, and overthrowing corporate social control, then you'll probably like these books. My only criticism is that they are a bit too graphic in places for my taste (mostly violence but some sex).

One of the most refreshing things about the book is that the author is an IT specialist, so the technology stuff isn't too bogus. In fact, even the "far-fetched" technology in the book is actually just an exaggeration of the current state of the art.

Preview chapters are available online for both Daemon and FreedomTM if you would like to read a sample before buying. I actually listened to the audiobook version of both books via iTunes which worked out quite well - the reader's voices enhanced the overall experience, especially the Daemon's computer-generated, English-accented female voice.

One of my favorite quotes from the book was:

"Technology. It is the physical manifestation of the human will. It began with simple tools. Then came the wheel, and on it goes to this very day. Civilizations rise and fall based on technological innovation. Bronze falls to iron. Iron falls to steel. Steel falls to gunpowder. Gunpowder falls to circuitry."

I don't think there is any doubt that circuitry, more specifically digital information, is becoming the dominant source of power. Why destroy a nation when you can simply crash its infrastructure and delete its data? Daemon and FreedomTM certainly drive this point home.

The future suggested by Daemon and FreedomTM is both frightening and exciting. Although a work of fiction primarily intended to entertain, I think some valuable lessons and cautions can be drawn from the story. Good stuff.

Monday, March 29, 2010

StudyHacks' Stretch Churn

Although I am no longer a student, I really enjoy reading Cal Newport's StudyHacks blog. In particular, I like its focus on achieving success through good time management, hard focus, deliberate practice, and obtaining outstanding skill.

In this post on James McLurkin, Cal discusses an interesting concept called Stretch Churn. Paraphrased from Cal's post:

  • Stretch Project: A project that requires a skill you don't have at the outset. Importantly, a stretch project is hard enough to stretch your ability but reasonable enough to be completed.
  • Stretch Churn: The number of stretch projects you complete per unit time.

The premise is that the higher your stretch churn rate, the more likely you are to obtain the kind of skill required to be a leader in your chosen field. As the interview with James demonstrates, highly successful people are adept at maintaining a high stretch churn rate. I suspect this is one of the underlying attributes of Outliers.

I think the stretch churn concept is an important insight because it clarifies how to apply the deliberate practice concept in engineering and research environments. Instead of working on a single problem over a long period of time - a common approach in research - the stretch churn concept suggests that it is better to work on a series of related, hard-but-achievable projects. In a way, this strikes me as the agile development model applied to becoming a domain expert.

On a personal level, I found the stretch churn concept interesting for two reasons. First, it explains why I highly value my advanced development experience - the very nature of the work has allowed me to maintain a high stretch churn rate for years. Second, it helped me realize that if I want to become a real domain expert, I'll have to focus my stretch projects more tightly so that they build upon each other. It's a vector math problem - stretch projects in many different directions result in little change when added together.

I suspect the stretch churn concept will be a valuable addition to my self-development toolbox.

Sunday, March 21, 2010

Recovering Deleted JPEGs from a FAT File System - Part 9

Part 9 in a series of posts on recovering deleted JPEG files from a FAT file system.

A month ago (!), in part 8, we looked at the JPEG file format specification to determine if there was sufficient determinism in the on-disk layout to allow the recovery of deleted files through analyzing the residual data in the file system. The answer was mixed:

  1. GOOD: Uniquely valued markers, discoverable through data inspection, identify the beginning and type of the segments that constitute a JPEG file.
  2. GOOD: The metadata segments have a pre-defined size.
  3. BAD: The length of the entropy encoded image data is, to the best of my knowledge, unspecified in the START-OF-SCAN segment header. Instead, an END-OF-IMAGE marker is used to identify the end of the entropy encoded data. The theory is that this is done to allow JPEG files to be written as the image is processed.

Essentially, this means that there is no way to determine through data inspection the length or location of the clusters containing the encoded image data. The only clue available is the END-OF-IMAGE marker at the end of the entropy encoded data.

One option is to discover and analyze latent directory entries in the data area - doing so could provide valuable clues to the start and length of erased JPEG files. The downsides to this approach are added complexity (recovering deleted directory entries) and incompleteness (directory entries for deleted JPEG files may not exist due to reuse).

A simpler approach is to inspect each cluster in the data area to see if it begins with a START-OF-IMAGE marker or contains an END-OF-IMAGE marker. Any extent of clusters bounded by START-OF-IMAGE and END-OF-IMAGE markers stands a good chance of being the data for a contiguous JPEG file - the very kind of file we've been trying to recover in this series. In this post, I'll implement this simple method and test the results. Follow the "Read more" link for the rest of the post.
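To give a flavor of the approach, here is a rough sketch in elisp - my illustration, not the series' actual implementation - of the marker scan described above:

(defun jpeg-candidate-extents (image-file cluster-size)
  "Return (START . END) cluster-index pairs in IMAGE-FILE bounded
by JPEG START-OF-IMAGE and END-OF-IMAGE markers."
  (with-temp-buffer
    (set-buffer-multibyte nil)              ; treat the image as raw bytes
    (insert-file-contents-literally image-file)
    (let ((soi (unibyte-string #xff #xd8))  ; START-OF-IMAGE marker
          (eoi (unibyte-string #xff #xd9))  ; END-OF-IMAGE marker
          (clusters (/ (buffer-size) cluster-size))
          start extents)
      (dotimes (i clusters)
        (let ((cluster (buffer-substring-no-properties
                        (1+ (* i cluster-size))
                        (1+ (* (1+ i) cluster-size)))))
          ;; a candidate extent begins at a cluster starting with SOI
          (when (and (null start) (string-prefix-p soi cluster))
            (setq start i))
          ;; and ends at the first subsequent cluster containing EOI
          (when (and start (string-match (regexp-quote eoi) cluster))
            (push (cons start i) extents)
            (setq start nil))))
      (nreverse extents))))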

Friday, March 19, 2010

Wait a moment...

The other day, I posted a lament about modern (popular) programming being mostly a matter of connecting pre-existing components or libraries with minimal work. In the post I stated that I didn't find such work satisfying.

The next day, however, I realized something - I've just spent nearly a decade in advanced development roles happily creating prototypes by modifying and combining pre-existing software components with the minimal possible work. Spot the inconsistency?

This perplexed me - why did I react this way to Mike Taylor's post when I have enjoyed such prototyping work so much?

After some thought, I concluded that my prototype work has involved modifying complex systems. This has required first understanding enough about each system's design and software to determine the minimal changes required to implement the desired functionality. So, although the eventual changes were relatively minor (100s to 10,000s of LOC), along the way I had to obtain a deep understanding to complete the task. I think this is the material difference between the prototyping work I've enjoyed and the kind of "library gluing" that I dislike (and Mike of course!). If true, then there is no inconsistency after all.

Clearly the issue isn't black-and-white - the amount of work performed doesn't represent the amount of understanding required. To some degree this reminds me of the Simplicity Cycle - after a certain point, enough understanding is achieved to make the solution simpler, not more complex. From this perspective, I suspect that the satisfaction that I - and perhaps others - seek is the result of crossing that complexity-understanding threshold - this may be the "grokking" point that was easier to achieve in simpler times (e.g. 8bit programming).

Tuesday, March 16, 2010

Grokking and Modern Programming

For this blog, I try not to repost popular links from high traffic aggregation sites - odds are you've already seen them. But Mike Taylor's comments in this post so resonated with me that I felt compelled to discuss it despite the attention it has received.

While reviewing a classic Commodore64 book, Mike segues into a follow-up discussion to his What Ever Happened to Programming post in which he says:

So I think this is part of what I was bemoaning in Whatever happened …: the loss of the total control that we had over our computers back when they were small enough that everything you needed to know would fit inside your head. It’s left me with a taste for grokking systems deeply and intimately, and that tendency is probably not a good fit for most modern programming, where you really don’t have time to go in an learn, say, Hibernate or Rails in detail: you just have to have the knack of skimming through a tutorial or two and picking up enough to get the current job done, more or less. I don’t mean to denigrate that: it’s an important and valuable skill. But it’s not one that moves my soul as Deep Knowing does.

I found this comment insightful as it made me realize that I have been possibly struggling with the same thing.

For as long as I can remember, I've felt compelled to Deeply Know the systems that I work on. Starting with my own adolescent experiments programming 8bit computers, I've enjoyed diving deep into the machine to understand its fundamental operation and then using that knowledge to grok whole-system behaviors. As Mike states in the post, this was possible to do in the 8bit days due to the simple machines and books providing all of the necessary information (my tome was Your Atari Computer which still sits on my bookshelf as a reminder of those happy times).

Unfortunately, two things have happened since then - machines have gotten more complicated and having a deep understanding seems to be less valued.

The first point is obvious: computers have gotten more complicated on every level - hardware architectures, operating systems, applications, and networking. Although the fundamentals can be learned, obtaining deep expertise in any of these areas requires specialization.

Having spent most of my career working on large-scale, enterprise computing and storage systems, I've experienced the leading edge of this expanding complexity firsthand. In my first job, I think I did pretty well at understanding large portions of that machine, but it's been a losing battle since then. The systems that I work with are now so numerous and complex that it's impossible to deeply understand them all - but I try.

Lately, I've been trying to do more hobby projects to have fun, and expose myself to domains outside of work. But in doing so I find that I quickly run into the second point that Mike's What Ever Happened to Programming post captures well - the hero of modern programming is the person that can stitch together libraries in the shortest amount of time in the fewest lines of code. The greatest heroes are those that can create and launch a startup in less than 24 hours while on a bus. That's great but it's not for me - I just don't find this form of programming satisfying.

I've been overly nostalgic lately, hence recent posts on clicky keyboards, old computer books, and old computer commercials. At first, I thought a looming birthday was to blame but Mike's post has me thinking that it's really a reaction to the shift in programming. The kind of work I like to do doesn't seem to be where all the "action" (read capital investment) is.

It's unreasonable to expect the world to revert back to the way things were. Therefore, there are two possible reactions:

  1. Deal with it and change with the times.
  2. Pick a niche where it is possible to "grok" the system and specialize. This doesn't mean building everything from scratch but rather picking a stable domain that allows an accumulated understanding of as much of the system as I care to know.

Traditionally I've been a generalist but I must admit that option 2 seems much more attractive than 1. I guess this is something to reflect on while planning out my future career.

Friday, March 12, 2010

Keyboard Madness

Last year, I decided to finally become a competent touch-typist and better Emacs user. An unforeseen result of both decisions is that I am developing a mild keyboard obsession.

For the past couple of years I have been using a Microsoft Natural Ergonomic keyboard, which has been good, but lately I've felt the urge for a change - an irrational desire, I'm sure.

Somewhere along the way in considering a new keyboard, I thought it would be cool to get a mechanical model. Probably due to nostalgia, the sound of a "clicky" keyboard is just one of those aesthetic things that makes me think of "real" programming and helps me enter a flow state.

Not knowing anything about mechanical keyboards, I thought it best to do some research. Through the power of Google, I found:

  • this blog post on mechanical keyboards with a video demonstration of various models.
  • this video summarizing the attributes of various mechanical switches.
  • many, many YouTube videos of people typing on various keyboards

After reviewing this material, I considered four options:

  • Kinesis Advantage Contoured: well known for its ergonomics and "cool factor". However, the high price (~$300) and unusual layout made me hesitant.
  • Unicomp Customizer 104: based on the same buckling-spring technology as the renowned IBM Model M and reasonably priced at $70. I decided against it mainly due to the greater key resistance.
  • Das Keyboard: a popular keyboard amongst geeks that uses Cherry Blue switches (tactile and clicky), priced at $130. Unfortunately, the glossy case reportedly attracts an abnormal amount of dust.
  • Filco Majestouch Tenkeyless: an imported keyboard, available with either Cherry Blue or Cherry Brown (tactile, no click) switches from elitekeyboards.com for ~$120.

In the end, I went with the Filco after reading many positive reviews. While I came close to buying the brown switches, I decided that I really wanted that aesthetic "clicky" sound of the blues.

It's only been a couple of days but so far I really like the Filco. The keys feel smooth and solid - my MacBookPro keyboard just feels wimpy now. The switches are definitely clicky; this YouTube video provides a pretty good example. The only negative is that I miss the ergonomic design of the Microsoft keyboard.

I'm just using the Filco at home for now to avoid driving my cube neighbors nuts with all the clicking but if things go well perhaps I'll buy another Filco with the brown switches for work.

Join the mechanical keyboard retro-revolution!

Wednesday, March 3, 2010

The Khan Academy

About a month ago I discovered The Khan Academy. Since then, I've found myself spending spare moments watching the video tutorials to refresh my memory on a number of topics including finance, banking, the credit crisis, statistics, linear algebra, and physics. Although short (~10 minutes each), the tutorials are very well done and a great resource.

Evidently, the creator, Salman Khan, started out making video tutorials for a geographically distant niece. Unexpectedly, other people around the world began watching the videos and sending positive feedback. This motivated Khan to turn the effort into a non-profit venture dedicated to providing high-quality, free educational materials. An inspiring story.

Also impressive is Khan's simple approach of drawing diagrams on a "black board" in real time - very similar to back-of-the-napkin visual brainstorming. Like all good teachers, Khan's deep understanding allows him to discuss complex topics in a readily understandable manner.

If you've got a spare moment, I encourage you to check out a video.

Tuesday, February 23, 2010

How the World Will Try to Stop You

The other day, I watched a recorded lecture on Google Video by University of Waterloo Economics Professor Larry Smith entitled "How the World Will Try to Stop You and Your Idea". The lecture was given to students interested in entrepreneurship with the goal of advising them on how to overcome resistance.

The points that I took away from the lecture were:

  • Entrepreneurs, especially young ones, are often given mixed messages. They are encouraged to change the world, but when they try, they are told that they won't be successful.
  • The vast majority of people are busy beyond belief. While it may seem like most are incapable of substantive thought, the fact of the matter is that they are simply too busy to do so.
  • Because they are too busy, most people rarely listen closely. This means that their feedback is very superficial and shouldn't be taken seriously.
  • Common criticisms are "it's been tried before" and "it won't work". It's important to counter these comments with probative questions like "when was it tried?" and "why not?". A lack of response indicates a baseless criticism. The rare factual response may provide useful information to refine the idea.
  • Young entrepreneurs are often told to get more experience, earn their "spurs", wait their turn, and suffer a few failures before starting a venture. This implies that entrepreneurs should get more degrees, work in a big company for 15 years, and wait until their forties to start a company. This is nonsense; history is full of examples of young, inexperienced people changing the world.
  • Instead of fighting resistance, avoid it. Don't tell anyone what you are really up to. Keep your own counsel and only tell people what they need to know. The best way to change the world is to sneak up on it.
  • Ultimately, courage is needed to ignore negative feedback. Unfortunately, there is no easy way to acquire it.

Regrettably, one of the two cameras used to record the lecture had a faulty audio connection. As a result, parts of the recording are not understandable. It's a shame that the audio from the working camera wasn't dubbed into the recording to fix the problem. Regardless, the talk is still worth listening to for the audio that is understandable.

I found many of Professor Smith's comments insightful and in line with my experiences. For example, I've spent the majority of my career in advanced development and have heard the "it won't work" and "it's been tried before" feedback almost daily. Professor Smith's talk helped me realize that I now reflexively ask the "why" and "when" follow-up questions. As he indicated, the criticism is often either outdated or baseless.

I also often run into the too-busy-to-think dilemma and presently spend the majority of my time supporting long-running campaigns to gradually get people to recognize and embrace innovative opportunities. These campaigns can be a lengthy process, but the alternative is rash decisions based on limited thought, which I think causes more harm than good.

Although I don't like to admit it, I have allowed myself to fall victim to the "need more experience" feedback more than once - sometimes it was even self-generated after working alongside extremely talented colleagues. The result: I now have multiple degrees and 14 years of experience working in a large company. The good news is that I am approaching my forties, so perhaps "my time" is nearing! Humor aside, I now recognize that focus and persistence are often more important for success than experience. That said, I strongly believe in continuous self-development and am always eager to obtain new knowledge and experiences. I think it's a matter of keeping a healthy balance between humility and hubris.

In summary, I really enjoyed this lecture. Professor Smith is clearly very thoughtful about many topics and passionate about helping students - both were inspiring to watch. If you have the time, I recommend seeing it for yourself.

A few years ago, I had a similar experience with another of Professor Smith's recorded lectures on the potential of expert systems. That lecture made a significant impression on me and continues to factor into my long-term career goals. I'm presently reading Professor Smith's book on the topic, Beyond the Internet: how expert systems will truly transform business, and look forward to writing its review in a future post.

In the immortal words of Guy Kawasaki, don't let the bozos grind you down!

Thursday, February 18, 2010

Recovering Deleted JPEGs from a FAT File System - Part 8

Part 8 in a series of posts on recovering deleted JPEG files from a FAT file system.

In part 7, I demonstrated recovering deleted JPEG files by knowing their pre-deletion locations in a FAT file system. In the real use-case of recovering accidentally deleted files, the locations are unknown, making this approach impossible.

Recovering deleted files without knowing their location requires a method to find them within the unerased data. In this post, I'll show how the structure of a JPEG file can be used to do just that. Follow the read more link for the full discussion.

Thursday, February 11, 2010

The Doctor's Computer

Who knew? The Doctor prefers Prime Computers. I love these.

Book Review: The Tinkertoy Computer

The Tinkertoy Computer by A.K. Dewdney.

After reading The Planiverse, I picked up a couple of Dewdney's other books. Published in 1993, this book is a compilation of some of his articles from the periodicals Scientific American and Algorithm.

The book contains 23 chapters organized into four themes. Of them, the following were my favorites:

Theme One: Matter Computes

  • Chapter 1: The Tinkertoy computer - an account of a tinkertoy computer built by Daniel Hillis and others at MIT designed to play Tic-Tac-Toe.
  • Chapter 2: The Rope-and-Pulley Wonder - methods for building boolean logic blocks out of ropes and pulleys.
  • Chapter 6: Dance of the Tur-mites - Turing machines and cellular automata.

Theme Two: Matter Misbehaves

  • Chapter 8: Star Trek Dynamics - implementation details of a 1970s era computer game.
  • Chapter 9: Weather in a Jar - the Lorenz attractor and simple models demonstrating the behavior.
  • Chapter 11: Designer Fractals - iterated function systems and their use for creating fractal patterns.

Theme Three: Mathematics Matters

  • Chapter 13: Mathematical Morsels - a summary of mathematical puzzles by Ross Honsberger.
  • Chapter 14: Golygon City - an introduction to golygons.
  • Chapter 15: Scanning the Cat - a simple algorithm for reconstructing 2-D images from 1-D shadows.
  • Chapter 16: Rigid Thinking - rigidity theory, and flexible nonconvex surfaces.
  • Chapter 17: Automated Math - an algorithm for determining the rules for numerical sequences.

Theme Four: Computers Create

  • Chapter 19: Chaos in A Major - the use of logistic maps to create chaotic music.
  • Chapter 23: Latticeworks by Hand - algorithms for creating lattices.

As you can tell, I more or less enjoyed the entire book. This wasn't surprising given that I bought the book because it touched upon a number of topics that I find very interesting: complexity theory, fractals, math puzzles, physics simulations, geometric patterns, and algorithms.

I was, however, pleasantly surprised by the nostalgia invoked by the simple programs presented in the book. The short, simple BASIC programs reminded me of the many, many hours I spent programming 8bit computers during my adolescence. I still recall the feeling of awe that resulted from seeing simple programs like these produce seemingly "magical" results. Computing was much simpler then but no less rewarding - hopefully short but powerful programs like these aren't becoming a lost art.

Saturday, February 6, 2010

Dynabook reflections

Like many others, I'm sure, I've been led by the recent iPad excitement to think more about Alan Kay's Dynabook concept.

Some think the Dynabook is just a highly portable computer such as a conventional laptop or tablet computer. Kay, however, had a much grander vision - the Dynabook was to be an "instrument", easy to use and program, that would allow "children of all ages" to learn experientially through simulation.

Kay and Goldberg's 1977 article, Personal Dynamic Media, briefly discusses the Dynabook as a learning device and their successful results from letting children use the "Interim Dynabook" - otherwise known as the Xerox PARC Alto. Kay explores this topic in greater depth in his talk Doing With Images Makes Symbols. Howard Rheingold discusses the Dynabook's history and potential as a "fantasy amplifier" in greater depth in Chapter 11 of his book Tools for Thought (a great book, available for free online here).

Kay conceived of the Dynabook in 1968 based on a number of influences.

In his talk at the Computer History Museum's 40th Anniversary of the Dynabook event, Kay does a great job of describing how these works influenced his thinking.

Kay's pursuit of the Dynabook produced many significant innovations like Object Oriented Programming and the overlapping window graphical user interface. It has also been the motivation behind his support of efforts like the One Laptop Per Child program, Squeak project, and EToys learning environment.

Modern technology now makes it possible to build portable computers that strongly resemble the Dynabook's physical form - the iPad is a great example. However, I think little progress has been made towards creating software that realizes the fantasy amplifier vision.

Kay is famous for saying that the computer revolution hasn't happened yet. I tend to agree - current computing seems almost primitive compared to the work cited above. I suspect the vast majority of computers are used today as mechanistic productivity enhancers (office apps), communicators of trivialities (tweets?), and sensory overloading distractions (games, video, etc).

The iPad will certainly be used for the same purposes - in fact, Jobs specifically profiled these use-cases in his keynote speech. However, I think the iPad has the greatest potential in the education space. Transforming textbooks into eBooks is only the first step - the next is to add interactive learning aids like simulations for exploratory experimentation. If (when?) this happens, the Dynabook will finally have come to life.

Perhaps it's time to download the iPad SDK, watch Stanford's iPhone Development class on iTunes, and lend a hand in making the revolution happen.

Thursday, January 28, 2010

Recovering Deleted JPEGs from a FAT File System - Part 7

Part 7 in a series of posts on recovering deleted JPEG files from a FAT file system.

After a hiatus due to the holidays, personal matters, and work stuff, I'm ready to continue the FAT Recover project. In this post I'll

  • finally demonstrate the two principal assumptions that this project is based on.
  • actually recover deleted files using a manual approach.

Follow the "read more" link for the detailed discussion.
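
To give a flavor of the manual approach before the jump: standard FAT undelete theory - and, I believe, the gist of the two assumptions - is that a deleted file's directory entry survives with its first byte overwritten by 0xE5, and that the file's data sits in contiguous clusters. Here's a rough Python sketch of the carve itself, simplified and with hypothetical parameter names rather than the project's actual code:

    import struct

    DELETED_MARK = 0xE5

    def carve_deleted_file(image, entry, data_region_offset, cluster_size):
        """Carve a deleted file's bytes out of a raw FAT image.

        image: the disk image as bytes; entry: a 32-byte directory entry.
        data_region_offset and cluster_size stand in for layout values
        computed from the boot sector in the earlier posts.
        """
        assert entry[0] == DELETED_MARK, "not a deleted entry"
        start_cluster = struct.unpack_from("<H", entry, 26)[0]  # FAT16 low word
        file_size = struct.unpack_from("<I", entry, 28)[0]      # size in bytes
        # Data clusters are numbered from 2; assume the file is contiguous.
        offset = data_region_offset + (start_cluster - 2) * cluster_size
        return image[offset:offset + file_size]

Under those two assumptions, recovery reduces to locating the 0xE5 entry and copying file_size bytes out of the data region.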

But first, a mea culpa - I've discovered that the code for long file name support in post 5 is utterly broken. The code works for files that use only a single long file name entry but does not correctly process file names spanning multiple entries. For now, I'll side-step the issue and post a fix at a later time. I'll also update post 5 to warn future readers. Apologies for the error - that's the danger of hacking in the wee hours with minimal unit testing.
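
For the record, the shape of the eventual fix: VFAT stores a long name as a chain of 32-byte entries preceding the short entry in reverse order, each holding 13 UCS-2 characters and a sequence number (with bit 0x40 flagging the final fragment). A minimal sketch of the multi-entry reassembly - my own simplified code with hypothetical names, not the forthcoming fix itself:

    def lfn_fragment(entry):
        """Extract the 13 UCS-2 characters stored in one 32-byte LFN entry."""
        # The characters are split across three fields within the entry.
        raw = entry[1:11] + entry[14:26] + entry[28:32]
        return raw.decode("utf-16-le").split("\x00")[0]

    def assemble_long_name(lfn_entries):
        """Reassemble a long file name from its chain of LFN entries."""
        fragments = {}
        for entry in lfn_entries:
            seq = entry[0] & 0x1F  # strip the 0x40 "last fragment" flag
            fragments[seq] = lfn_fragment(entry)
        # Join fragments in ascending sequence order (1 = start of the name).
        return "".join(fragments[s] for s in sorted(fragments))

A single-entry name is just the degenerate case; handling multiple entries requires collecting every fragment and ordering by sequence number before joining.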

Wednesday, January 20, 2010

Book Review: The Simplicity Cycle

A couple of weeks ago I came across a link to the book The Simplicity Cycle by Dan Ward. It's available as a free download so I grabbed a copy and finally got the chance to read it.

The book begins with the argument that projects follow a curve through the two-dimensional space formed by the qualities "goodness" and "complexity". All projects start at the origin with no complexity and no goodness. As they progress, complexity and goodness increase together as the initial problems are understood and suitable solutions created.

The book's primary thesis is that eventually all projects reach an inflection point where further increases in complexity yield less goodness, not more. To improve goodness beyond this point, the focus must switch from increasing complexity to reducing it. These transitions from simple, to complex, and back to simple again form the "simplicity cycle" which, when successful, results in "an elegant, graceful, and streamlined solution" capable of representing "tremendous complexity" while being "at once profound and breathtakingly simple". Inspiring stuff.
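
To make the inflection point concrete, here's a toy model - entirely my own invention, not from the book - in which goodness first rises and then falls as complexity grows:

    # Toy model: goodness peaks, then declines, as complexity increases.
    def goodness(complexity):
        return complexity * (10 - complexity)  # hypothetical curve; peak at 5

    prev = goodness(0)
    for c in range(1, 10):
        g = goodness(c)
        print(f"complexity={c}  goodness={g}  ({'rising' if g > prev else 'falling'})")
        prev = g

Past the peak, every added unit of complexity costs goodness - precisely the point at which the book says the focus must flip to simplification.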

Not surprisingly, the author asserts that increasing complexity after the inflection point results in an undesirable outcome. For software, this could mean an application with too many features, options, or ways to accomplish tasks. For decision making, this could mean having too much information to draw an effective conclusion.

Unfortunately, the non-linear relationship between complexity and goodness can go unrecognized - from page 26:

One pitfall that designers, engineers, and academicians may fall prey to is the belief that continuing to increase complexity … will continue to produce increases in goodness. It just isn't the case.

In some cases, this mistaken belief can be due to a fascination with complexity itself - here goodness doesn't mean utility to the consumer but rather intellectual gratification to the producer. Also quoting from page 26:

This is genesis-gone-wild, cancerous growth. It is the product of “over-learning” and smugness, where people fall in love with complexity.

In other cases, the complexity is mastered but without achieving the deep insight needed to reduce the subject to its fundamental problems and solutions - to my mind a kind of mechanistic expertise. Quoting from page 30:

It generally means we are over-thinking a problem or over-engineering a solution. Think of it as achieving “the complexity on the other side of understanding,”

Here we find the learned academician who everyone assumes is brilliant because nobody can understand a word he says. In fact, his work may simply be complicated and have very limited goodness. All those extra squiggles are not very useful, no matter how complex.

The book admits the difficulty of knowing when the inflection point has been reached. Guessing too late results in the over-complication already discussed. However, guessing too soon results in premature optimization - an equally bad outcome, as it fails to reach optimal goodness. Unfortunately, the author asserts that no reliable method for identifying the inflection point exists.

The book makes the important point that the complexity phase cannot simply be skipped. Quoting from page 46:

However, the simplicity in this region is built upon an essential foundation of earlier complexity. We can’t just jump from simplistic to simple, skipping the complex entirely. The initial increase in complexity is as crucial to maximizing goodness as the later decrease in complexity.

The book concludes with examples of good and bad paths through the complexity vs. goodness space. I found these helpful for reflecting on how the simplicity cycle relates to my past experiences.

This book was a pleasant surprise. Although short and free, I found it thought-provoking and insightful. In particular, the "simplicity cycle" concept matches well with my experience. When developing software, I often find that after writing a fair amount of code, common patterns emerge that allow for a simpler design. Conversely, I've witnessed firsthand the disastrous outcome of projects that fail to simplify after the inflection point. Although these experiences have given me a healthy fear of complexity, this book provided a context for that fear and a framework for managing it.

My only criticism of the simplicity cycle concept is that it may encourage the kind of over-generalization that Joel Spolsky warns against in his essay "Don't Let Architecture Astronauts Scare You". Quoting from Joel's essay:

When you go too far up, abstraction-wise, you run out of oxygen. Sometimes smart thinkers just don't know when to stop, and they create these absurd, all-encompassing, high-level pictures of the universe that are all good and fine, but don't actually mean anything at all.

These are the people I call Architecture Astronauts. It's very hard to get them to write code or design programs, because they won't stop thinking about Architecture. They're astronauts because they are above the oxygen level, I don't know how they're breathing.

The point I take away from this is that simplification is not the same as abstraction - simplification remains focused and actionable, while abstraction becomes vague and unactionable. Unfortunately, I think many mistake generalization for simplification, but now that I'm consciously aware of the difference, I'll be less likely to fall into the trap myself.

Saturday, January 16, 2010

New Year, New Role

With the New Year I took on a new position at work - the same kind of role leading an advanced development team, but with a much expanded sphere of responsibility. While exciting, this change plus some personal matters have left less time for blogging.

I expect to recover my blogging time this week and plan to finish up the FAT Recover series by the end of the month.

The bright side of this interruption is that it helped me realize how much I enjoy blogging. According to Google Analytics, this blog isn't setting the web on fire, but the personal benefits have more than justified the effort regardless.

I have a lot of plans for this blog in 2010 and I intend to set aside the time required to remain active. That said, family and work take higher priority so such interruptions are probably inevitable - who knows what the year will bring.

Wednesday, January 6, 2010

Book Review: Organizing Genius

Organizing Genius: The Secrets of Creative Collaboration by Warren Bennis & Patricia Ward Biederman

I first read this book a few years ago and recently reviewed it in preparation for a new role at work. As before, I found it to be an extremely insightful resource on the attributes of great, innovative teams.

The majority of the book provides accounts of the following "great" groups:

  • The Walt Disney Studio
  • Xerox PARC
  • The Apple Macintosh Team
  • The 1992 Clinton Campaign Team
  • Lockheed's Skunk Works Group
  • Black Mountain College
  • The Manhattan Project

As stated in the introduction, the authors attempt to systematically examine these groups in the hope of identifying their "collective magic". In the final chapter, they distill their findings into the following 15 points.

  1. Greatness starts with superb people.
  2. Great Groups and great leaders create each other.
  3. Every Great Group has a strong leader.
  4. The leaders of Great Groups love talent and know where to find it.
  5. Great Groups are full of talented people who can work together.
  6. Great Groups think they are on a mission from God.
  7. Every Great Group is an island - but an island with a bridge to the mainland.
  8. Great Groups see themselves as underdogs.
  9. Great Groups always have an enemy.
  10. People in Great Groups have blinders on.
  11. Great Groups are optimistic, not realistic.
  12. In Great Groups the right person has the right job.
  13. The Leaders of Great Groups give them what they need and free them from the rest.
  14. Great Groups ship.
  15. Great work is its own reward.

Generally speaking, I agree with all of these points. The closest I've come to working in a Great Group was my first job, building a big-iron ccNUMA machine. That group and experience had many of the elements listed above, and a decade later I recognize how rare an opportunity that was. I suspect others from the team feel the same way, based on the reminiscing we do whenever our paths cross.

Clearly the leaders of Great Groups are responsible for creating an environment in which the team can flourish. In my role as the leader of an advanced development engineering team, I've tried my best to follow this guidance - with, I think, moderate success. As my role and responsibility expand, I need to place greater focus on creating the right environment - to that end, this book will be a valuable resource.