Technical ramblings
Thursday, February 15, 2007
  Moving
I've moved over to Chaos In Motion with a new blog at Development Chaos Theory.
 
Thursday, December 14, 2006
  I hate wizards.
No, not the Harry Potter variety. I cannot stand computer 'wizards': essentially a sequence of prompts which guides the user through a list of choices.

Now generally you have two types of computer interactions in a GUI-centric environment. The first we could loosely call the "document-centric" or "editor-centric" interaction, which involves an application with one or more windows, each window displaying a representation of one or more documents. Your web browser is an excellent example of a "document-centric" application that doesn't (generally) involve saving the contents but only browsing them, and where the source of the "document" is not your local hard disk.

The second type of interaction is the "settings" or "control panel" interaction, where you have a list of specific settings that you need to modify. The difference here is that you generally interact with only one window containing information about a single set of environmental settings. If you are within an application, this is generally represented using a modal dialog box. If the primary purpose of your application is to act as a control panel, your application usually presents only a single window.

A 'wizard' represents an even worse beast than an application-modal dialog box. It's a modal dialog box with a series of fixed modes which guide the user--and that's bad: the computer program takes control from the user rather than letting the user keep control. The worst of this breed is the "installer" wizard: the user wants to install something and the computer, rather than obliging or at least telling the user "you have some options you may want to think about", instead interrogates the user with twenty questions, of which he'll probably get nineteen wrong. After all, unlike the developers or the testers, the poor end user who just installed your application for the first time has no friggin' idea what the options mean. (I worked on an application once where the installer literally queried you for twelve or thirteen settings prior to installing--half of whose ramifications you couldn't possibly understand until after the application's help system was installed.)

Wizards need to die!

Anything that can be done with a sequence of modal dialog boxes giving the user the third degree can also be handled with a control panel. For example, with an installer, just put up two screens: the obligatory EULA which the user won't read but which at least makes legal happy, then an "options" control panel which uses tabs to bury the more obscure settings ("install with warp core preheat active?") so that the user can just say "yeah, install this, and give me all the options" and be up and running.
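To make that concrete, here's a rough sketch of that two-screen installer in Java Swing. Everything here--the class name, the labels, the defaults--is purely illustrative, not any real installer:

```java
import java.awt.BorderLayout;
import javax.swing.*;

/** Illustrative two-screen installer: one EULA dialog, then one tabbed
 *  options panel with sensible defaults so "Install" needs zero decisions. */
public class SimpleInstaller {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            // Screen 1: the obligatory EULA, which at least makes legal happy.
            int choice = JOptionPane.showConfirmDialog(null,
                    new JScrollPane(new JTextArea("EULA text here...", 10, 40)),
                    "License Agreement", JOptionPane.OK_CANCEL_OPTION);
            if (choice != JOptionPane.OK_OPTION) return;

            // Screen 2: one options panel; obscure settings buried in a tab.
            JTabbedPane tabs = new JTabbedPane();
            JPanel basic = new JPanel();
            basic.add(new JLabel("Install location:"));
            basic.add(new JTextField("C:\\Program Files\\MyApp", 20));
            tabs.addTab("Basic", basic);

            JPanel advanced = new JPanel();
            advanced.add(new JCheckBox("Install with warp core preheat active", false));
            tabs.addTab("Advanced", advanced);

            JFrame frame = new JFrame("Install Options");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(tabs, BorderLayout.CENTER);
            frame.add(new JButton("Install"), BorderLayout.SOUTH);
            frame.pack();
            frame.setVisible(true);
        });
    }
}
```

The point being: the user who wants the defaults clicks through two screens and is done; the user who cares about warp core preheat knows exactly where to find it.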

Which also reminds me: why do installers ask the user for anything beyond "where do you want me to put this?" and "do you want me to unpack all the options now?" An installer should be like a moving company: polite, quiet, efficient, and not constantly pestering the poor user with questions the user probably doesn't know the answer to. ("I think the couch would look better over there. What do you think?") And unlike the moving company, you should be able to put the installer disk back in the computer and load or unload options after the initial install--and not have to wade through a constant barrage of questions. ("Why are you doing this? What do you want? How much do you want? What is the air speed to wing span ratio of a fully laden swallow?")
 
Wednesday, January 18, 2006
  If only good people wrote code.
It happens here that the really good developers find themselves stuck fixing bugs that were created by poor programmers. You'd think the ideal would be the other way around: good programmers writing great code, and the newbies spending their time diagnosing issues in that code on the rare occasions they surface.

However, scheduling pressures cause the good programmers (who have better diagnostic skills than the poor programmers) to be yanked off of development tasks to diagnose emergency problems out in the field, while the underutilized poor programmers, being heads in management's head count, are put on the task of writing the next-generation software.

Thus the cycle of crappy software and discouraged senior programmers propagates...
 
Tuesday, January 17, 2006
  Engineering Big Systems
So far I can think of several components of the project I'm working on where I've complained about how the code was over-engineered and fragile--where the original author of the bad code got promoted and moved on, and I got stuck with the crap...

It's really rather sad.

So here's some quick notes on how to build a large system.

First, a big system is not a bigger "small" system. Instead, a big system is simply a lot of small systems strung together.

Because a big system is simply a bunch of small systems strung together, it's important that each small system be engineered to be as simple to understand as possible. This means each small system should be easy to follow and deterministic.

For example, today's problem I'm dealing with is failover: if one box goes down, the software I'm working on is supposed to connect to a second box to keep working. Now failover is conceptually easy: an array of boxes to connect to, and a small engine which attempts to connect to each until it makes a successful connection. You also need a background thread that determines if a connection can be re-established to the primary, and some way to force the currently open connections to fail, so they re-run the connection loop of trying each machine until they get a successful connection. So ideally you're talking about (1) an array of machines, (2) a connection factory that walks the array to find a machine that's up, (3) a wrapper around the connection object that permits you to forcefully break an established connection (for failback), (4) some way to read the list of machines that are available to connect to, and (5) some means to detect whether a connection exception thrown by your connection wrapper is a failover event ("transaction failed: machine is down") versus a non-failover event ("transaction failed: illegal transaction").

Conceptually this is five moving parts: five simple components strung together into a small system. It's deterministic: part (2) walks array (1) in order, so when something fails, you can watch it go "connect to A? No. Connect to B? No. Connect to C? Yes" repeatedly.
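A minimal sketch of parts (1) through (3) in Java might look like this. The class and method names are my own invention for illustration, not our actual code:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.List;

/** Half of part (5): thrown when the machine is down, as distinct from
 *  an exception meaning the transaction itself was illegal. */
class FailoverException extends IOException {
    FailoverException(String message, Throwable cause) { super(message, cause); }
}

/** Part (2): walks the array of machines (1) in order until one answers. */
class ConnectionFactory {
    private final List<InetSocketAddress> machines;  // part (1); part (4) fills it in

    ConnectionFactory(List<InetSocketAddress> machines) { this.machines = machines; }

    /** Deterministic walk: connect to A? No. Connect to B? No. Connect to C? Yes. */
    ManagedConnection connect() throws FailoverException {
        IOException last = null;
        for (InetSocketAddress machine : machines) {
            try {
                return new ManagedConnection(
                        new Socket(machine.getHostName(), machine.getPort()));
            } catch (IOException e) {
                last = e;  // machine is down; walk on to the next one
            }
        }
        throw new FailoverException("all machines are down", last);
    }
}

/** Part (3): wraps the raw connection so the failback thread can break it,
 *  forcing callers back through the connection loop. */
class ManagedConnection {
    private final Socket socket;

    ManagedConnection(Socket socket) { this.socket = socket; }

    void forceClose() throws IOException { socket.close(); }

    Socket raw() { return socket; }
}
```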

So, is our failover code that simple? Oh, good Lord no; instead of taking about 5 or 6 Java classes to implement, our current failover implementation runs almost 50 classes, consisting of at least a half-dozen different design models which make no sense in the context of the software. It's not deterministic: multiple threads run asynchronously doing housekeeping in odd ways which make no sense. And worse, rather than providing a simple wrapper around our underlying connection object, the connection object itself was engineered with another dozen or so classes which implement parts of the failover logic--leftover logic from several redesigns that were never cleaned up after.

Rather than an array, a loop, and a few conditional statements, several thousand lines of code sit in its place--several thousand lines of code which turn out to be extremely fragile.


Second, as a big system is a bunch of small systems strung together, each small building block needs to be defensively programmed, so other building blocks which are not as well coded don't bring the system down.

In the system I'm working on, once failover fails, we get a cascading series of failures which result in the system dropping into an inoperative state somewhat randomly. The system is supposed to fail gracefully into a less-functional (but still running) state. Instead, the system degrades until, three days later, the OS runs out of resources and the system collapses. Now if failover had been engineered simply, a leak in resources would be easy to find: there is just one array, one loop, and some logic to debug. But no; because the thing is an over-engineered piece of crap, it's impossible to figure out what failed.

Worse, because other elements of the system are not defensively built, other components of the system (event logging, alerting, etc.) all start failing at different rates: they expect either a successful connection or an explicit failure.
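Here's what defensive means at the level of one of those components--say, event logging. This is a sketch with made-up names, not our code; the point is that the component assumes its neighbor can fail in any way at all, and degrades instead of cascading:

```java
/** Anything the logger depends on that can fail underneath it. */
interface RemoteLogger {
    void send(String event) throws Exception;
}

/** Defensive wrapper: whether the connection layer yields a successful
 *  connection, an explicit failure, or something weirder, the logger
 *  degrades to a local fallback and the rest of the system keeps running. */
class DefensiveEventLogger {
    private final RemoteLogger remote;
    private boolean remoteHealthy = true;

    DefensiveEventLogger(RemoteLogger remote) { this.remote = remote; }

    void log(String event) {
        if (remoteHealthy) {
            try {
                remote.send(event);
                return;
            } catch (Exception e) {
                remoteHealthy = false;  // degrade; don't take anyone down with us
            }
        }
        System.err.println("[degraded] " + event);  // less functional, still running
    }
}
```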


Third, and this is where the art of development comes into play and replaces the "engineering" aspects of development, the small components need to be designed around the functional components necessary to get the big tasks done. In our system, for example, our core architecture sits on top of Tomcat. Each component that communicates with the outside world runs in its own servlet, but on top of a common servlet base which provides common services beyond those provided by Tomcat. (Things like system metrics, for example.) On the blackboard, one needs to be able to draw the software, draw the blocks that go into the software, then break out each block and break those into smaller blocks. It's the "fractal" way of designing software, and because so much of that is art, often you wind up having to do several designs before you conceptualize the software correctly.
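As a sketch of that layering, the common servlet base might look something like this--simplified to the bone, with a request counter standing in for all the shared services:

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Common base for every component servlet: the shared services live in one
 *  block on the diagram, and each component fills in exactly one method. */
public abstract class ComponentServlet extends HttpServlet {
    private final AtomicLong requestCount = new AtomicLong();

    @Override
    protected final void service(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        requestCount.incrementAndGet();     // common service: system metrics
        handleComponentRequest(req, resp);  // the component's own small system
    }

    /** Each component that talks to the outside world implements just this. */
    protected abstract void handleComponentRequest(
            HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException;

    public long requestCount() { return requestCount.get(); }
}
```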

(That's why sometimes it's best to design a system by writing it once, throwing the result away, then writing it again. The first time was practice to gain experience with the components of the system you will eventually need.)


Oddly enough, our software is easy to draw into block diagrams. But for some reason, our software is not written into modules that follow the block diagrams--which means the entire thing is a tangled mess.


It's so simple to engineer a top-notch system: keep the components simple. I don't understand why people insist on creating crap instead--though oddly enough, many of my co-workers, even when they agree with the principle of keeping it simple, go off and engage in bad programming practices anyways.
 
Wednesday, November 23, 2005
  Selling Out
Many years ago I was a freelance computer consultant--a programmer for hire for short-term projects. And I was quite successful: over the nine years I freelanced, I managed to buy a house in a fairly tight market.

For those who want to freelance, a few words of wisdom. First, your income will be irregular. And I don't mean "the paycheck may show up a week or two late" irregular. I mean "you may not get paid for four months while you're looking for a new project" irregular. And that's not an exaggeration: there were several periods over the nine years when I went for several months without ever seeing a check.

So first and foremost, if you decide to consult, have at least six months of living expenses stashed away somewhere. If you are currently living paycheck to paycheck, then do not go into consulting until you sort out your finances. I do not think I can stress this enough. Having a second income may also help; my parents leveraged themselves into the home building industry by my father working full time while my mother worked as an architectural designer. After a decade, her work was solid enough that my father was finally able to leave his full-time job.

Second, as my parents' example illustrates, it takes a very long time to get established. Freelance work is not something that happens overnight, though sometimes it can help to take a project with you to get started.

Third, don't expect to do freelance work and have more time left over. If you aren't working like a dog making your client happy (because, honestly, you will be hired because your client is an idiot with a disaster you need to solve--which means he also has no idea how much slack to give you before he fires you), you're working like a dog finding another client to make happy.


Well, three years ago I reached a turning point where I wanted to work on larger projects--so I took a job at Symantec so I could learn more about management. And a funny thing happened.

I found myself selling out.

Today I get paid six figures--and it's a reliable six figures. My salary is no longer a roller-coaster ride from feast one year to famine the next.

Further, I find myself working just as hard--I'm a bit of a workaholic--but rather than dealing with a client who doesn't know if I'm working enough, I'm now dealing with a boss who couldn't be more ecstatic with my level of work. I routinely get extra bonuses (about two "A+ awards" per year) for my level of participation, along with the occasional stock option grant (which is worthless because of today's stock value, but the thought is nice) and the like. The people I work with are nice--and I've been around long enough to develop a better friendship with many of them than I would have had I breezed out of there after a few months.

And I'm starting to learn the organization there.


Yes, I've sold out. But the reality is that I'm getting paid more than I was as a consultant, I'm working less, and it's less stressful because my level of participation is well appreciated. And I couldn't be happier overall with my work situation.

I don't think I'll be going back to consulting anytime in the near future.
 
Tuesday, November 15, 2005
  Non Player Characters
So there is this "innovations" meeting--a sort of "rah rah" meeting here at Symantec--discussing how the CTO is dealing with promoting innovation. He defined innovation as creativity that actually changes the user experience. That is, it's not good just to invent a better mousetrap; you need to invent one that the user wants and will buy--or at least one which will change how the user catches mice. A better mousetrap isn't good enough; it's got to be something the user uses.

So he discusses different ways to promote innovation--and it's all the same platitudes I've heard forever at various companies: "management did this", "management did that", "the sales engineer managers got together", "the architects got together"--combined with "and we have to figure out a better way of communicating the customer facing problems down to the engineers so they can factor this in when they innovate."

It's all a top-down model. So silly me, I stuck my hand up and asked "what about bottom-up innovation and cross-pollination?" combined with "often the customer doesn't know what they really want"--which led to a rather interesting response about "how do we balance the need to tell a consistent story" with "how do we get better insight into the expertise developed in this company."


But I think there is a bigger problem here that just struck me. Our monkey brains have a finite number of people we can actually deal with. Beyond a certain number of people, our monkey brains are simply incapable of treating them as human beings.

So in a large company, the fundamental problem for upper management is that the thousands of workers cannot conceptually be seen as anything other than cogs in a very large machine. Now this wouldn't be such a problem if there weren't a second problem--which is that in everyone's desire for upward mobility, most of us focus upwards: that is, we are more likely to see the people above us as real, since we wish to join their ranks--and we see the people at the same level as us, and those below us, as, well, non-player characters.

This becomes a fundamental problem. Individuals who have something positive to contribute are basically discouraged by the triple pressures of upward competition, their own non-player status in the eyes of management, and their immediate superiors' desire to preserve their own jobs. So innovation gets squashed--unless it is top down.

It's just human nature.
 
Wednesday, November 09, 2005
  User Interface Annoyance
Okay, so here's a stupid question. How many of you hate it when an application starts up or opens a new window and the focus is taken away from the window you happen to be using? That is, you start up an application that takes a few seconds to launch, so you switch to another, start typing in your password (say)--and then suddenly half your password is in the URL bar of a browser window?

Here's what I don't get. This is an easy thing to fix: if the user starts an application, set a flag to indicate that it's the current frontmost application. But if the user then switches to another application, force the front window of the application being launched to open just behind the current window.

Simple. Since each window has to be associated with its application in the window manager of every windowing operating system out there, it's easy to detect if the user--after starting an application--then does something that sends events to the current focus of a different running application. If such an event (a keyboard or mouse event) occurs, then change the current frontmost application flag to the application the user just switched to. Then when an application wants to open a frontmost window, have that window appear just behind the window the user is currently using.
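Here's that bookkeeping as a sketch in Java. Every type and method here is hypothetical--I'm not quoting any real window manager's API, just the logic:

```java
/** Hypothetical window-manager bookkeeping for launch-time focus. */
class FocusPolicy {
    /** Stand-in for a process handle in a real window manager. */
    static final class App { }

    enum ZOrder { FRONT, BEHIND_ACTIVE }

    private App launching;  // app the user just started, if any
    private App active;     // app currently receiving the user's input

    /** The user started an app: flag it as entitled to come frontmost. */
    void onAppLaunched(App app) {
        launching = app;
        active = app;
    }

    /** A keyboard or mouse event went to some app's focused window. */
    void onUserInput(App target) {
        active = target;
        if (launching != null && launching != target) {
            launching = null;  // the user moved on; the launched app lost its claim
        }
    }

    /** Where a newly opened top-level window should land in the z-order. */
    ZOrder placeNewWindow(App owner) {
        if (owner == launching || owner == active) {
            return ZOrder.FRONT;      // the user is still waiting on this app
        }
        return ZOrder.BEHIND_ACTIVE;  // don't steal the user's keystrokes
    }
}
```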

Of course there should be some exceptions: an alert that pops up probably should go frontmost even if its application is not--but such notifications should be reserved for something disastrous or for a user-set alarm.

But by and large, it's a pain in the neck that every time an application opens a new window, that process grabs focus from the user. This makes the entire operation no longer a user-centric operation, but a computer-centric one.

And anything that is not user centric is just fucking annoying.
 
