

August 27, 2005

Review of The Best Software Writing I

The Best Software Writing I
Joel Spolsky (editor)
Apress, 2005

Despite plenty of examples to the contrary, there is some genuinely good writing out there on software subjects. Most of what you can read on the topic, on and off the web, is not particularly well written, but there are gems to be found. In this book, Joel Spolsky sets out to show what some of the good examples look like.

Joel and the team at Apress have collected 29 articles from various authors writing on software subjects. These essays show the qualities that Joel believes embody good writing. Joel introduces each essay in his own ... unique style. Although I have disagreed with some of Joel's arguments in the past, I have to admit that he did a great job of finding well-written articles.

The topics range from technical articles on programming to the business of software to the human side of development, and many points in between. As with any diverse body of work, you may find, as I did, that you don't agree with some of the positions the authors take. However, each article is so well written that even if you don't fully agree with the author's position, you will still learn quite a bit by reading it. With so many different authors, you are also exposed to a wide range of styles, from cartoons to essays, but each one is easy and enjoyable to read.

If you are looking for purely technical articles, containing nothing but language tricks and programming techniques, you might want to take a pass on this book. Many of the articles cover the non-technical aspects of software development. In many ways, that caused me to enjoy the book more. I have loads of books on the technical side of software development, but good articles that help us understand the business side of things seem to be much rarer.

In addition, I found a couple of the articles introduced me to new weblogs that I plan to follow for some time to come. (Thanks, Joel. As if I needed to lose any more time to extraneous reading.<grin/>)

Unlike most of the books I review here, this one makes pretty good reading to unwind with. Instead of forcing more technical material into your brain, pick it up and give yourself some time to relax.

Posted by GWade at 05:34 PM.

Upgraded MT

After using the 2.65 version of MovableType for over a year, I finally decided to upgrade. I saw there were plugins that support filtering comment and trackback spam, and I'd like to restore those features to my site.

Of course, as soon as I got started with the upgrade, a new version of MT came out with possibly better protection. I'll wait a little while before upgrading again.

Anyway, starting with this entry, I'll go back to allowing comments. If this survives without much trouble, I'll re-enable comments throughout my blog.

Posted by GWade at 05:26 PM.

August 23, 2005

Unintuitive Multithreading: Waiting for Performance

This essay continues my exploration of misunderstandings about multithreading. In the first essay of this series, Unintuitive Multithreading: Speed, I explained that multithreading does not inherently speed up a process. In this essay, I plan to show how not to get more performance out of a multithreaded system.

Many new multithreading (MT) developers make the same mistake when trying to speed up their first MT application. Assume a new MT programmer named George has just rewritten his program to use threads and finds that the performance is not much better than it was before threads. What is he going to do? George digs through the documentation for his threading library and finds that something called 'priority' determines which threads get executed before other threads. Obviously, he just needs to raise the priority of his most important thread and all will be well.
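In code, George's fix looks something like the following minimal sketch. I'm assuming a POSIX system where std::thread's native handle is a pthread; the work function is a made-up stand-in, and a real-time policy like SCHED_FIFO normally requires elevated privileges.

    // George's first attempt: push the "important" thread to the top priority.
    // Hypothetical sketch for a POSIX system; rarely the right answer.
    #include <pthread.h>
    #include <sched.h>
    #include <thread>

    void important_work()
    {
        volatile unsigned long sum = 0;
        for (unsigned long i = 0; i < 100000000UL; ++i)
            sum += i;                       // stand-in for the CPU-bound task
    }

    int main()
    {
        std::thread worker(important_work);

        sched_param param{};
        param.sched_priority = sched_get_priority_max(SCHED_FIFO);
        pthread_setschedparam(worker.native_handle(), SCHED_FIFO, &param);

        worker.join();
        return 0;
    }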

Unfortunately, George finds that this does not work as he expects. Sometimes his code executes a little faster, but sometimes it executes a lot slower. Once or twice it even locks up and has to be terminated externally. Like many first-timers, George immediately begins tinkering with the priorities of other threads. Each change results in similar inconsistent results. What could he possibly do to fix the problem?

Although George is not a real person, the example follows a progression I've seen several times. The original problem was either that the program was not well suited to multithreading or that George made a mistake partitioning the code into threads. Neither of these would be fixed by changing priorities.

After dealing with this problem time and time again, I've come to the conclusion that the first rule of thread priority is:

1. Don't change the priority of your threads.

Two likely results of changing thread priorities are priority inversion and thread starvation. In priority inversion, a high-priority thread ends up waiting on a resource held by a lower-priority thread that rarely gets scheduled, so the high-priority thread effectively runs at the low priority. In starvation, a low-priority thread never gets enough CPU time to make progress at all. Both occur because a thread has a higher priority than it should in the context of the surrounding program. This leads directly to the second rule of thread priority:

2. If you are absolutely sure that you know what you are doing and still want to raise a thread's priority, you are wrong.

If you think this rule is harsh, you haven't dealt with as many well-meaning, over-confident Georges as I have. In actuality, this rule is similar to the rule about avoiding hand-optimizing code. Modern compilers do a much better job of tweaking code to improve its run-time than most programmers can ever hope to. Likewise, many modern threading systems perform minor tweaks to a thread's priority as needed. If you mess with a thread's priority, you will probably defeat that optimization.

The problem is that programmers are, in general, no better at recognizing priority issues than they are at recognizing the 10% of their code that consumes most of the running time. To make this point, I'll use an example we used to teach in one of my programming classes. Let's say I have a program that consists of three threads. All three threads must complete before the program is complete. At a high level, the threads are

  • Thread A does pure computations. It is just dependent on the CPU.
  • Thread B is writing data to the hard disk.
  • Thread C is writing reports directly to an old printer.

This is obviously contrived, but humor me so we can figure out how to set the priorities for these threads. Now George looks at this problem and decides that thread A is doing the most work and requires the CPU to do it, so it should have the highest priority. The printer will take forever to write anything, so he decides that thread C should be the lowest priority. That way no other thread will be waiting on it. That leaves thread B with the middle priority.

Once again, George has chosen his priorities in a way that makes some sense, but does not work well for a multithreading problem. In this particular case, we will generate the worst possible runtime. Thread A will run to completion with the fewest interruptions (and context switches). Then the program will spend a lot of time waiting and doing nothing as each of the other two threads slowly sends out its data. Remember that the goal was to get all of the threads to complete as quickly as possible, not to get the calculation over with as fast as possible.

The key to this problem is to give the highest priority to the thread that does the least work between blocking calls. Multithreaded apps that make use of the hurry-up-and-wait principle are often the best performers. Thread C needs to send individual bytes down to the printer and then wait a long time for the printer to do its thing, so we want to start on that task as soon as possible every time we can write. Thread B can get its data to disk faster than our printer thread can get its data out, so it should be lower priority than thread C. That way, if both are ready to work, C will get a chance to start first and go back to waiting.

The lowest priority goes to thread A. It will be interrupted more and will therefore have a longer run-time. But thread A's time will be interleaved with the waiting for the other two threads, so the overall performance is better. This is one of the surprising effects that makes MT worthwhile even though it slows down CPU-bound threads.
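To make the example concrete, here is a minimal C++ sketch of the three threads. The file names and the printer device are made up, and I've left out the actual priority calls because the mechanism is platform-specific; the comments just record the ordering we arrived at (C above B above A).

    #include <fstream>
    #include <functional>
    #include <thread>
    #include <vector>

    void compute()                                        // Thread A: pure CPU work -> lowest priority
    {
        volatile double x = 0.0;
        for (long i = 0; i < 50000000L; ++i)
            x += i * 0.5;
    }

    void write_disk(const std::vector<char>& data)        // Thread B: blocks on the disk -> middle priority
    {
        std::ofstream out("results.dat", std::ios::binary);
        out.write(data.data(), static_cast<std::streamsize>(data.size()));
    }

    void write_printer(const std::vector<char>& report)   // Thread C: blocks the longest -> highest priority
    {
        std::ofstream prn("/dev/lp0");                    // hypothetical old printer
        for (char c : report) {
            prn.put(c);                                   // a byte at a time...
            prn.flush();                                  // ...then wait on the printer
        }
    }

    int main()
    {
        std::vector<char> data(1 << 20, 'x');
        std::vector<char> report(4096, 'r');

        std::thread a(compute);
        std::thread b(write_disk, std::cref(data));
        std::thread c(write_printer, std::cref(report));

        a.join();
        b.join();
        c.join();
        return 0;
    }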

The bad news is that most MT problems are not this easy to characterize, so there may not be a clear-cut answer. The threading system can monitor the performance dynamically and optimize better than we could. This shows once again that we are better off leaving priorities to the threading system.

The third rule of thread priority is likely to confuse more people than the previous two combined.

3. If you must change a thread's priority, lower it to speed up the code.

Most modern threading systems will automatically raise the priority of a thread that spends most of its time waiting. They will also lower the priority of a thread that uses all of its time-slice. If your system does not do this, consider lowering the priority of any CPU-bound threads by a small amount to allow blocked threads to get run and go back to waiting. It is often surprising how this can produce better performance.
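As a concrete sketch of rule 3, here is roughly what nudging the busy thread down looks like. I'm using the Win32 API here only because its per-thread priority call is simple; on other systems the mechanism differs (on Linux, for example, it is the thread's nice value).

    // Drop the CPU-bound thread slightly so blocked threads run as soon as they wake.
    #include <windows.h>

    DWORD WINAPI cpu_bound_work(LPVOID)
    {
        volatile unsigned long long sum = 0;
        for (unsigned long long i = 0; i < 500000000ULL; ++i)
            sum += i;                        // stand-in for the real calculation
        return 0;
    }

    int main()
    {
        HANDLE worker = CreateThread(nullptr, 0, cpu_bound_work, nullptr, 0, nullptr);

        // Lower, not raise: the waiting threads will preempt this one when they wake.
        SetThreadPriority(worker, THREAD_PRIORITY_BELOW_NORMAL);

        WaitForSingleObject(worker, INFINITE);
        CloseHandle(worker);
        return 0;
    }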

This will only really work if the MT code is structured correctly. We'll look into that issue a little more next time.

Posted by GWade at 10:46 PM.

August 20, 2005

Unintuitive Multithreading: Speed

This begins a short series of essays on what most programmers get wrong about multithreading. Over and over again, I've seen programmers make the same mistakes in multithreaded applications on multiple operating systems and in multiple languages. So I decided to give you the benefits of what little insight I have into the problem.

The first problem most programmers have with multithreading is extremely fundamental: multithreading an application does not speed it up. In fact, the decision to multithread a program is rarely about raw speed. I expect that some of the people reading this are going to decide that I have no idea what I'm talking about at this point. However, I can back up this statement.

Given a single-CPU machine and a process that is CPU-bound, adding threads must slow down the program. In addition to the work that we were already doing, we now have to spend extra time doing context switches. Therefore, for the CPU-bound program, threading is guaranteed to slow down the program. (Of course, this only applies to preemptive multithreading.)
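If you want to convince yourself, here is a minimal sketch that times the same CPU-bound sum run once on a single thread and then split across four threads. The numbers are arbitrary; the point is that on a one-CPU machine the threaded run does the same work plus the context switches.

    #include <chrono>
    #include <cstdio>
    #include <thread>
    #include <vector>

    unsigned long long sum_range(unsigned long long lo, unsigned long long hi)
    {
        unsigned long long s = 0;
        for (unsigned long long i = lo; i < hi; ++i)
            s += i;
        return s;
    }

    int main()
    {
        const unsigned long long n = 400000000ULL;

        auto t0 = std::chrono::steady_clock::now();
        unsigned long long single = sum_range(0, n);          // one thread, no switching
        auto t1 = std::chrono::steady_clock::now();

        unsigned long long parts[4] = {0, 0, 0, 0};
        std::vector<std::thread> pool;
        for (int i = 0; i < 4; ++i)
            pool.emplace_back([&parts, i, n] {
                parts[i] = sum_range(i * n / 4, (i + 1) * n / 4);
            });
        for (auto& t : pool)
            t.join();
        auto t2 = std::chrono::steady_clock::now();

        using ms = std::chrono::milliseconds;
        std::printf("single: %lld ms, threaded: %lld ms, same answer: %s\n",
            (long long)std::chrono::duration_cast<ms>(t1 - t0).count(),
            (long long)std::chrono::duration_cast<ms>(t2 - t1).count(),
            single == parts[0] + parts[1] + parts[2] + parts[3] ? "yes" : "no");
        return 0;
    }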

If raw speed is not the issue, why would we multithread a program? On a single CPU system, there are only two real reasons to use threads.

  • responsiveness
  • resources that block

Responsiveness

The main reason that preemptive multitasking has become popular in recent years is responsiveness. One process or thread should not be able to lock up the whole machine when the user wants access. The average user does not care if some process running in the background takes a few seconds longer to run as long as the computer responds immediately when a key is pressed or the mouse is moved. Although the computer is actually doing things slower, it feels faster because the computer is more responsive.

Many years ago, I was working in medical research. There was a program written by one of the programmers that normally ran over the weekend. Unfortunately, the guy that wrote it did not write any output to the screen or disk until the program was finished running. So, on Monday morning, we were often confronted with a blank screen and we didn't know if the program was still running, locked up, or what. We had no feedback at all. (This was back in the days of DOS, so we couldn't do anything else while the program was running either.) Eventually we changed the program to add a little progress counter to the screen, so we could at least tell if the program was still running. We used to joke about how much faster this made the program. Even though we knew the extra output was slowing down the real work, the feedback made it seem faster.

This concept is what drives most multithreading development. We want instant feedback and responsive computers. Things can actually run slower, as long as we get these two things.
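Here is a minimal sketch of that progress-counter idea expressed with threads. The work is a made-up stand-in, but the shape is the same: one thread grinds away while the other keeps the user informed.

    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    std::atomic<int> percent_done{0};

    void long_running_job()
    {
        for (int step = 1; step <= 100; ++step) {
            std::this_thread::sleep_for(std::chrono::milliseconds(50));  // stand-in for real work
            percent_done.store(step);
        }
    }

    int main()
    {
        std::thread worker(long_running_job);

        // The "responsive" thread: cheap, frequent feedback for the user.
        while (percent_done.load() < 100) {
            std::printf("\r%3d%% complete", percent_done.load());
            std::fflush(stdout);
            std::this_thread::sleep_for(std::chrono::milliseconds(200));
        }

        worker.join();
        std::printf("\rdone            \n");
        return 0;
    }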

Blocking

The other reason for multithreading is that not all processes are CPU-bound. At some point, most programs must do something that blocks progress. We might need to read data from disk to do further calculations, or retrieve data from a database, or even get user input. While this process or thread is blocked, it would be nice to allow the CPU to do other work. Threading makes this possible. (Actually, we used to do this with a cruder form of multitasking years ago. But, threads do make it easier.)

With multithreaded systems, the CPU can ignore threads that are blocked and continue with threads that are ready for work. This makes better use of the CPU and allows us to appear to be doing more than one thing at a time. In fact, by carefully organizing our code to keep the processor busy, we can appear to be running faster. In this case, we are not really speeding up the code. We are just no longer wasting CPU cycles.
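As a minimal sketch (the file name is made up), here is one thread blocking on a read while the main thread keeps the CPU busy with work that doesn't need the file.

    #include <fstream>
    #include <functional>
    #include <iostream>
    #include <iterator>
    #include <string>
    #include <thread>

    void load_file(const std::string& name, std::string& out)
    {
        std::ifstream in(name);                 // this thread blocks waiting on the disk
        out.assign(std::istreambuf_iterator<char>(in),
                   std::istreambuf_iterator<char>());
    }

    int main()
    {
        std::string contents;
        std::thread reader(load_file, std::string("input.dat"), std::ref(contents));

        // Meanwhile, the CPU stays busy instead of idling behind the read.
        volatile double x = 0.0;
        for (long i = 0; i < 20000000L; ++i)
            x += i * 0.001;

        reader.join();                          // now it is safe to use the data
        std::cout << "read " << contents.size() << " bytes\n";
        return 0;
    }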

Multiple CPUs

Of course, the entire discussion above assumes a single CPU. If we have multiple CPUs in a system, the situation is only slightly different. We can get more work done at a time because there is more than one CPU, but the problem remains the same. We cannot get more work done than we have CPUs.
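If you're curious how much real parallelism you actually have to work with, C++ can give you a hint; a trivial sketch:

    #include <iostream>
    #include <thread>

    int main()
    {
        // Number of hardware threads the system reports; 0 means "unknown".
        unsigned n = std::thread::hardware_concurrency();
        std::cout << "hardware threads: " << n << "\n";
        return 0;
    }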

In a later essay, I'll delve into a piece of advice I received a long time ago that you should never have more threads than you have CPUs (plus a few extra to take advantage of blocking calls). This turns out to be like most multithreading advice: partly right, but mostly wrong.

Posted by GWade at 10:17 PM.

August 18, 2005

Review of C++ Common Knowledge

C++ Common Knowledge
Stephen C. Dewhurst
Addison-Wesley, 2005

The subtitle of this book sums it up nicely: Essential Intermediate Programming. If someone has not mastered, or at least understood, the material in this book, he or she is still a junior C++ programmer. Although this material is necessary, it is not sufficient to make someone an intermediate-level C++ programmer.

As explained in the preface, Dewhurst wrote this book partially to save himself from explaining these same topics every time he deals with a new set of programmers. He also explains that he has not covered every important topic in the book. In order to make the book more usable, it has been reduced to 63 core points that are either central to your understanding of the C++ programming language or often misunderstood.

Although he does not go into extreme depth on every one of these subjects, Dewhurst does capture enough of the details to help you understand why the point matters and why it works the way it does. I have read almost all of these items in other books, sometimes in more detail. But, there were still a few points that I feel I understand better after his explanations.

Dewhurst begins with topics that anyone who programs in C++ should be at least somewhat familiar with ("Data Abstraction", "Polymorphism", etc.) and works up through "Template Argument Deduction" and "Generic Algorithms". None of these chapters can be considered the definitive, be-all-and-end-all explanation of its topic. However, each is concise and covers the minimum you need to understand about that topic.

The only reason I found this book less useful than many of the books I've read recently is that I already understand most of the topics well. A few of the template chapters extended my understanding a bit, but the rest were covered in more detail elsewhere. That being said, I can see this book being of real use to any junior or intermediate-level C++ programmer. If you are a senior-level programmer, you might find this book useful as a reference for the more junior programmers you work with. I also think it helps a more senior programmer recognize some of the points where a junior programmer is likely to have problems.

Posted by GWade at 05:25 PM.

August 17, 2005

More Human Multitasking

Isn't it funny how you sometimes run into the same concept everywhere at once? A couple of weeks ago, I wrote a piece (Another View of Human Multitasking) refuting some of the conclusions in a Joel Spolsky article on human multitasking. This week, I stumbled across another article, Creating Passionate Users: Your brain on multitasking, that makes pretty much the same points as Joel's essay. Interestingly for me, this essay points to some original research backing up her claims.

As I suspected, the research is specifically related to pre-emptive modes of multitasking. As I stated in my earlier essay, we've known for years that pre-emptive multitasking is not the fastest way to solve problems on a computer either. If the tasks are CPU-bound, every time-slice incurs the task-switch overhead. The reasons we use pre-emptive multitasking in computers have little to do with overall processing speed.

As I said in my previous essay, switching tasks when you are blocked is the only way to get more work done in a given amount of time with a multitasking system. Just like a computer, a human can get more done by having lower-priority tasks to turn to when the main task is blocked. That way, even when you can't make progress on the main task, you can still make progress on something. This is what I have always meant when I say that I multitask well.

Interrupts

One area I did not touch on in the previous essay was the concept of interrupts. When an interrupt comes in, there is a forced task switch, just like with pre-emptive multitasking. Unlike a computer, humans cannot store their mental state on the stack and come back to it. An interrupt pretty much makes you lose all of the dynamic state you've built up. Anything you've written down or committed to more long-term storage is retained, of course. But the part that you are working on right now is lost unless you explicitly take time to save it.

This explains why phone calls or random meetings can really ruin your flow. While in flow, it feels to me like I have more of the problem and solution space in my head at one time. It almost feels like I can see a large portion of the problem space spread out in front of me. When an interruption occurs, all of that understanding and feel for the problem space vanishes. There's no time or place to store it. So, once the interrupt is handled, we have to start from scratch slowly building up that information all over again.

Posted by GWade at 03:40 PM.

August 12, 2005

Review of Exceptional C++ Style

Exceptional C++ Style
Herb Sutter
Addison-Wesley, 2005

Once again, Herb Sutter provides us with a set of problems that teach important lessons about the C++ programming language. Each item in the book covers some problem that a C++ programmer might see in a particular program or design. As Sutter solves each problem, he gives insight into the concepts surrounding it and the pitfalls that may trip up an unwary programmer.

As usual for one of his Exceptional C++ books, Sutter spends time covering some areas of C++, like exceptions, memory management, and templates, where programmers often have problems. But, unlike many books that teach the syntax of the language, he goes deeper to improve your understanding of how various features work and why. His explanation of exception specifications and why you should not use them is extremely well done. Sutter also explains why the standard streams could be considered a step backward from printf, but that there is hope on the horizon for solutions that support the best of both worlds.

Sutter ends the book with a critique of the std::string class, showing how it could have been better designed based on what we now know of C++. For many programmers, this section alone should be worth the price of the book. The author goes through many of the design tradeoffs with an eye towards simplifying the interface without loss of functionality or efficiency. It is rare that you get a chance to sit with an expert programmer and get him to explain the design of a non-trivial class.

In addition to hard-core technical information, there is a fair amount of style advice and some fun examples (how many '+' characters can you write in a row in a legal C++ program?), all of which give you more insight into this powerful language. And throughout the whole book, Sutter's interesting humor lightens what could otherwise be a very heavy read.

If you are trying to improve your understanding of C++, this book will explain parts of the language that were never quite clear. Although I would not recommend this book for a novice C++ programmer, I think any intermediate to senior C++ programmer would be well-served by reading it. I plan to recommend it to the C++ programmers I know.

Posted by GWade at 08:44 PM.