
CGI Programming Unleashed

Chapter 6 -- Testing and Debugging



Can you picture yourself buying a car that has only been tested in making right turns or at speeds less than 30 miles an hour? Probably not, and for the same reasons you shouldn't picture yourself shipping out a piece of software without extensive debugging. You, along with other intelligent consumers, expect the products you use to meet your standards for quality when used in situations that they were designed for. Whether it's a word processor that doesn't save files, a toaster that doesn't toast, or a CGI program that just sits there and never finishes or returns an error, running into something that fails to meet a user's standards is a bad thing.

Debugging applications certainly isn't the high point of programming. It's bad enough when someone else hunts down the bugs and you have to fix them, but it's worse when you have to do it all yourself. Besides being time-consuming, it's often frustrating: here's this great piece of work that you put together, and now you have to go through it bit by bit to check everything, instead of tossing in those cool new features you thought of during last night's movie. Is debugging really worth it? You'd better believe it.

CGI applications really need debugging. The reason for this is that there are lots of variables involved in the program's function: who accesses it, where they're accessing it from, what it's being accessed with, what kind of server it's running on, and more! Assuming that it'll work fine because it seems to do what you want when it's accessed from your desk just isn't enough. You have to be sure that anyone who should be able to use it can use it.

We'll look at the phases of testing and debugging to see how you can ensure that what you've created meets your expectations and behaves after you've set it loose on the world.

The Process and Methodology

The best testing and debugging really starts in the planning stage. If you've planned your application out well in advance, you're less likely to have mistakes because of things that are overlooked or hastily done. When you're just in the beginning stages of writing the code, it's also easier to put messages and functions in that allow you to analyze why the problem is occurring. Just like preventative maintenance, this is preventative coding.

Does preventative coding mean that you'll have no problems at all when you go to use this application you've spent so long creating? Heck no. It's inevitable that there will be minor problems in the code: things like typos, missing semicolons or line feeds, or just things that you hadn't originally considered when planning it out. If you didn't plan it out at all, and just typed it in between the Late Late Show and sunrise, it wouldn't be surprising to find a rather large number of those little annoying bits stuck in random places.

Because you can have problems, and they could be major or minor, there are a couple of steps you can take before making the code publicly accessible to ensure that you're not going to cause problems for yourself, your server, or anyone who might end up using the code. I'll call these the phases of CGI Testing and Debugging.

  1. Review it
  2. Isolate it
  3. Test it
  4. Debug it
  5. Test it again
  6. Go for it

Each of these has special reasons for being where it is, and each one is no less important than any other. The reason for this order is to eliminate errors and problems that can end up being compounded if you skip too far ahead. For example, if you take a completely untested script and get an error when you first run it from your Web server, it could be the script, it could be the machine, it could be the server software; there are too many variables to efficiently narrow it down.

The Review Cycle

Reviewing your code is done before it ever sees the light of day. It doesn't involve a Web server, and it doesn't need an advanced method of checking. All it involves is taking a good long look at your code.

Why in the world would you want to stare at a printout of your code, or scroll through it onscreen? Because it's easy. Out of all the possible testing methods, this is the one that's easiest to do on a bus, on a plane, sitting in the park, or even discussing it with a friend. What you're looking for is anything that seems out-of-place-any obvious omissions, any function that you thought you weren't going to include but is still sitting in the code, checking your comments (you did put comments in the code, didn't you?), and generally ensuring that what you're looking at is what you intended to create. If it isn't what you thought it was, now is the time to back off the testing phase and go back to the drawing board. After all, why test stuff you don't plan to use, or that doesn't even look complete?

The review phase is also a great time to identify possible trouble spots, or areas that are critical to the application functioning correctly. This doesn't mean that you're going over every line of code over and over again, but, rather, that you pick out spots such as where it reads data from the user, and where it's performing an operation that you're only somewhat sure will work. Mark them with comments, circle them in red pen or highlighter, but make sure you point them out to yourself. These will come in handy when you're starting to do the real testing, because anything that sticks out now should be a big red flag when it comes time to create a testing plan. If you're worried about it, it should get tested often, and tested hard.

At Your Command…

Command-line testing is the next part of the review process. At this stage of the game you can attack your program in an almost casual manner, because you have complete control over how it sees the world around it. There are no networks to get in the way, no beta software for a Web server, no extra processes. It's just you, your program, and the command prompt of your choice.

Hard-Coded Data

There are several ways to use the command line for testing. The simplest method is to test with hard-coded data. So, if you're expecting someone to submit a serial number, you supply an ideal serial number yourself. You can then verify that, given ideal data, the application processes it correctly. For instance, take the example of Listing 6.1 for processing a form.

Listing 6.1. An example of Forms processing in Perl.
require 'cgi-lib.pl';

#Use the 'ReadParse' subroutine from cgi-lib.pl to gather data
&ReadParse(*input);

#Now print a header and process the data..
print "Content-type: text/html \n\n";
if ($input{'serial'}) {
    &check_serial;   # subroutine that handles the serial number
}
else {
    print "<h1>Form Received</h1>\n";
}

Though this is a simplistic case, where it just checks whether the variable named $input{'serial'} is empty in order to determine which branch is run, it's still impossible to put the program through its paces correctly without having some real data in $input{'serial'}. That's easily remedied: just edit the script and place a value in there, as in Listing 6.2.

Listing 6.2. Using hard-coded data for testing purposes.
require 'cgi-lib.pl';

#Use the 'ReadParse' subroutine from cgi-lib.pl to gather data
&ReadParse(*input);

#Data used for command-line testing - TEMPORARY USE ONLY!
$input{'serial'} = "12345";

#Now print a header and process the data..
print "Content-type: text/html \n\n";
if ($input{'serial'}) {
    &check_serial;   # subroutine that handles the serial number
}
else {
    print "<h1>Form Received</h1>\n";
}

Be sure to place the hardcoded value somewhere after the process that reads data from the source, otherwise you'll really get everything confused. You should also very obviously mark the hardcoded values to be removed later on. You wouldn't want to leave a hardcoded serial number in a program that is supposed to provide information to people based on that number. It would think everyone was the same person.

Although hardcoding values is very easy to do, you shouldn't rely on it for anything other than spot checks of the code. The main reason for this isn't that it's monotonous to go in and keep changing the values to test different things (though that's a big factor), it's that you're modifying the original script itself. Should something happen where you forget to take out those values, you're asking for problems. Or if the file is supposed to be read-only, you'll have to keep changing the permissions on it back and forth. Not a good scenario either way.

Wrapper Scripts

The next step up from hardcoded values is a wrapper script. As most scripts will be reading data from environment variables, the purpose of the wrapper script is to set those environment values to some specific values. This means that you're no longer going in and changing your primary script, which is a step in the right direction.

There are two different types of wrapper scripts: ones where you hardcode the values in them and ones where you don't. Out of the two choices, the first is obviously easier because all you really have to do is run something like the shell script shown in Listing 6.3.

Listing 6.3. A sample shell script CGI wrapper.
#!/bin/sh
#Set just the environment variables your script needs...
QUERY_STRING="data+goes+here"; export QUERY_STRING
REQUEST_METHOD="GET"; export REQUEST_METHOD
#...and finish by running the script itself (substitute its real name)
./yourscript.pl

This gives you the ability to go ahead and set just the environment variables you need. It then finishes by running your script. It is small, easy to make, and effective. You could even redirect the output of the script to a file, giving you a printable record of what the program's output (and/or visible error messages) is.

Another method that is slightly more involved, but gives more flexibility, is to build an interactive front end script for command-line testing. This would prompt you for each of the bits of data that would normally be supplied, and could also include default values so you didn't have to keep typing in repetitive data. It would be much the same process, but with a few additions here and there. The Perl script in Listing 6.4 is an example of something of this type.

Listing 6.4. Example of passing command-line values into a CGI script.
# Generic Interactive command-line tester
print "Enter a value for REQUEST_METHOD: \n";
chop($ENV{'REQUEST_METHOD'} = <STDIN>);
print "Enter a value for QUERY_STRING: \n";
chop($ENV{'QUERY_STRING'} = <STDIN>);
exec "./yourscript.pl";   # substitute the name of your CGI program

You can add whatever environment variables you might want, depending on what values you're looking for to evaluate within your program. Regardless of the language of your actual CGI program, command line wrappers can be in almost any language, as long as they can set environment variables and execute another program.

Some other possible additions to a command-line testing program include modifications that allow placing input on STDIN, so that a program that reads data from a POST method can function as it's supposed to, and the ability to read all input from a file, so that you don't have to type certain information in over and over again, sending the results to an output file with no difficulties.
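As a sketch of the first of those additions, the following wrapper sets up the environment a POST request would create and pipes the form data into the script's STDIN. The script name, field names, and test data are placeholders, not taken from the chapter's examples:

```perl
# Hypothetical POST-method wrapper: sets up the CGI environment,
# then feeds the form data to the script on STDIN
$data = "name=Joe&serial=12345";     # placeholder test data
$ENV{'REQUEST_METHOD'} = "POST";
$ENV{'CONTENT_TYPE'}   = "application/x-www-form-urlencoded";
$ENV{'CONTENT_LENGTH'} = length($data);
open(SCRIPT, "| ./yourscript.pl > results.out") || die "Can't run script: $!\n";
print SCRIPT $data;
close(SCRIPT);
```

The results end up in results.out, giving you the printable record mentioned earlier.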

Perl 5, used in conjunction with the CGI.pm library, has the convenient ability to save you even that amount of effort. You can enter name=value pairs right on the command line, with no wrapper script, and it'll understand what you're trying to do. For more information, see the CGI.pm documentation.

What you're really checking for during command-line testing is the general category of problems: things that look out of place, immediate errors, and other nastiness that jumps right out. Once everything looks acceptable, and you can get the program to behave in a manner you'd expect, you'll probably want to save the output of your program into a file so that you can compare against it later. Knowing what the program was sending out before lets you see whether the same thing is happening once you get it onto a server. This is your baseline reference.

Solitary Confinement

Once you have your baseline reference from command line testing, you're ready to move onto a server, but not just any server. You want to place the script in a location where you can safely go wrong. Remember, you're in the testing phases and anything can happen. For just that reason, you want something that meets the following criteria:

Preventing Harm to Original Data

Say your program reads in the log file and searches for a specific line. With just one little error in a script, you can wipe out the log, and lose all the data it contains. Whoops! To show you just how easy it is, look at the following line of Perl code that is supposed to open the log file:

open(LOG, ">logfile");

The problem here is that the > symbol means "write to the file," and, normally, creates a new file, overwriting whatever was already there. Whether or not it erases what was there, it's certainly opening the door for data to get overwritten, or for the entire log file to get corrupted. What the script really meant to say was:

open(LOG, "<logfile");
It's the < symbol that tells Perl to open a file for reading, not writing. Although you probably won't have any errors like that in your code, it's always possible. And if you're dealing with your online Web server, you can't afford to take that chance. You might erase a configuration file, or even lose some obscure but important data that will be impossible to track down and replace.
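One inexpensive safeguard, whichever direction you open the file, is to check that the open actually succeeded before touching any data. This is a sketch only; the file name logfile is a stand-in for your real log:

```perl
# Open the log read-only, and stop immediately if the open fails
open(LOG, "<logfile") || die "Can't open logfile for reading: $!\n";
while (<LOG>) {
    # ...search each line for the text you're after...
}
close(LOG);
```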

There are a variety of ways to adjust where the files are being drawn from, but sometimes it's impossible to get around the fact that certain files that have to be in certain locations must be accessed. In those cases, you should always make a copy of the original. Even if it requires a lot of juggling to get the necessary available space, do it. Think of how much of a pain it would be to try and track down just what got changed without an original to compare it to.

Is Not Easily Accessible to General Users

What's the easiest way to keep people from getting to your script? Why, just take the network cable and…hold that thought right there. Before you go and make what could be a horrible mistake, review your options for isolating a server before yanking any cables or doing something else equally as drastic.

Separation from the Network

Taking out the network cable from a Web server isolates it, but it's going a little far. The computer is often very dependent on other machines being connected to it, for a variety of reasons. In addition, it might serve as a location for data that other people internally access, and you'll be crippling their access to what they need. If you're not experienced with networking machines and the type your server is on, removing it completely from the network isn't really the best option.

If you are experienced with networking and the type of computer the server's on, and you know the whole slew of effects that can cascade as a result of the machine being taken off the network (perhaps your Web server functions as your mail server, firewall, or NIS server), you can certainly use that as an option. Even so, you should be hesitant to do so.

If you can't physically pull the plug, what other methods are there? Here are three options, in order of how easy they are to use:

Hiding the Script

Hiding the script is very easy and very commonly used. You place the script in your cgi-bin directory and don't tell anyone about it. You don't put big links to it from your home page saying "Don't click this. I'm testing a script." It might be convenient for you, but who can resist clicking a link that says "Don't click this…?" Exactly.

The problem with hiding your script is that it's never really hidden from all possible searchers. Search engines have this annoying tendency: the page that you want to show up will never seem to be there, but the ones that you least want people to know about will pop up as big as life during a search. Isn't information technology great? Although you can rely on this method for short-term tests, don't leave it there for very long, or you risk the consequences.

Securing the Script

One of the most effective ways of protecting your script, and one that's very easy to implement, is using built-in server security to deny access to a particular script. Then it doesn't matter who knows about your script; your server won't give people the chance to do anything with it. Two common permission schemes are user/password authentication and general refusal based on IP address. Of the two, refusal by IP address is better for this use: if you have to keep typing in a user name and a password to get at your script, you'll get very annoyed with it very quickly.

Most servers have nice, easy ways of setting these security levels; in Process Software's Purveyor for NT, for example, you can do it right from the File Manager. If you're unfamiliar with how to do it with your particular server software, or unsure whether it supports it, a quick browse through the documentation should resolve both issues without too much fuss.
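As an illustration for NCSA-style servers (the IP address here is a placeholder), a per-directory access-control section along these lines refuses the test area to everyone except one trusted machine:

```
<Limit GET POST>
order deny,allow
deny from all
allow from 192.168.1.10
</Limit>
```

Anyone else who requests the script then receives the 403 Forbidden error discussed later in this chapter.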

Development Machine

The best of all possible worlds, though the least commonly available method, is to have a development machine that is nearly identical to the machine that users will be accessing. Many server software packages come with a license that allows you to set the software up on more than one machine for this purpose, referring to one as the Development Server and the other as the Production (or Live) Server. If the server software you're using is freeware, like NCSA HTTPd, then you can set it up on whatever machines you care to.

Obtaining a machine that's similar in configuration might be a tough job, but if you're doing something that could potentially disrupt the system, it is less effort to dredge up a spare machine, even temporarily, than to reconfigure your server machine.

After doing what you can to minimize who can get at your script and what possible damage it can do, it's time to start the testing.

Ladies and Gentlemen, Start Your Testing

There are a number of schools of thought on how to debug an application. One of these is the "pound on it 'til it breaks" school, and it works like this:

  1. Put the program somewhere.
  2. Randomly, and aggressively, do anything you can think of to it.
  3. Fix whatever appears to be broken.

If you have a couple hundred monkeys with keyboards and some spare time, this can be a great testing method. Of course, you could just as easily have the monkeys write the code itself and hope for the best. This isn't to say that some good old fashioned boot stomping on the application doesn't help as part of an organized testing situation, but it ends up wasting your time as a programmer. How do the testers know what's really a bug, and what's just a function? Who instructs them? What if they don't know what kind of results you're looking for?

Sure, you could test a search engine by having people type things into it and seeing if they get a response result. But what if all the responses point to the same place, even though the labels for the pointers say they're different? And what's to say there isn't some combination that's not being tested? You need to get organized to get results.

The Testing Process

To really do some testing, you need two things: people to do the testing and a plan of attack for how you're going to do it.

Marshal Your Forces

Testing an application by yourself is not the best possible option. If you're testing by yourself, you normally have a pretty short list of resources: you, some caffeine, a computer, and lots of time. You're just one person, and you're also biased: you wrote the code. This means that you might, even subconsciously, miss seeing small problems, because you relate them to something else that you were thinking of adding later, or that you didn't take out in the first version. It also means that it's going to be a long time before your program can be completely tested, and that, while you're working on fixing any problems, no other testing is taking place.

By corralling a few of your friends, co-workers, neighbors, or relatives, you can create a team of testers. They don't have to be programmers; they don't even have to be familiar with computers. All you have to do is show them what to do and let them go after it. The purpose, after all, of CGI programs is to let a wide variety of people use them to perform a function. Sometimes the problems you find in an application aren't bugs; they're design flaws. You don't necessarily have to admit them to people, but you should certainly be willing to be flexible. After all, you're not necessarily the one who's going to be using the program most of the time.

The number of people you need for testing your program is relative to the importance of the finished product, as well as the anticipated number of users. If it's an unimportant system administration tool that you and maybe two other people will be using, then just you and those other two people should be more than enough. If it's something more important, like an online tax return helper, you better start calling in favors from everyone you know.

Once you have these piles of people, there's an important thing you need to think about: What the heck are they going to do? You can't sit them down in front of a machine and say "OK, test it!" You have to create an organized plan for which elements of the program should be tested in what order, and how. Even if you're stuck doing it by yourself this is necessary to keep both your sanity and your time well under control.

Elements of a Testing Plan

A testing plan is like a battle plan: you have your objectives, you know your resources, and you analyze the best way to take control of the situation. You have to approach it in an organized and methodical manner to make sure you, and any people you have helping you, don't miss something that's going to harm the program when it's found later.

You've already completed two parts of a testing plan: reviewing your work and testing it on a command line. Now you need to organize your methods into more Web server-focused efforts.

First, look at the program and see what it is you're allowing people to do. Are they searching for text? Filling out a survey? Trying to be directed to a random link? If you're accepting input, ask yourself what data you're expecting to receive, what form it should be in, and what should happen when it doesn't arrive that way.

For every action that you allow the user, you need to verify the data that corresponds to that request. If you ask them to type in a serial number, are you checking to see if it follows a specific convention? Are you checking to see if they enter anything at all? One of the first things you can do is create a short list of what kind of data you're expecting. Table 6.1 shows how this might be laid out for our sample.

Table 6.1. Laying out data to be used in your program.
Data              Expected Format                                       Special Considerations
Name              Text, up to 40 characters                             Generates error if left blank
E-mail Address    Text, up to 60 characters, containing an '@' symbol   Generates error if left blank or if '@' symbol not present
Comments          Text, up to 500 characters                            None

This immediately gives you something to experiment with. If you fill out only one field, you should be getting at least one error (preferably two). If you try it, and it merrily accepts just the one field you entered, you know immediately that there's a problem. You can go ahead and check any elements that require special formatting, such as the e-mail address. If you type in "foo@bar," it should generate an error. If it doesn't, you've got another problem.
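To make that concrete, here is a sketch of the checks Table 6.1 implies. The field names ($input{'name'}, $input{'email'}) and the exact error wording are assumptions, not part of the chapter's sample:

```perl
# Hypothetical validation pass for the fields laid out in Table 6.1
$errors = "";
$errors .= "Please enter your name.<br>\n" if ($input{'name'} eq "");
$errors .= "Please enter your e-mail address.<br>\n" if ($input{'email'} eq "");
$errors .= "Your e-mail address must contain an '@' symbol.<br>\n"
    if ($input{'email'} ne "" && index($input{'email'}, '@') < 0);

if ($errors) {
    print "<h1>There were problems with your form</h1>\n";
    print $errors;
}
```

Filling out only the comments field should then produce exactly the two errors the table calls for.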

This kind of testing is the first step in verifying input, and is called Boundary Testing: you know what you're expecting to receive, and what limits you've placed on what people should type in. You need to verify that the program behaves as expected when accepting the data, especially if the data does not fall within the accepted value boundaries.

It's very important to keep in mind that, just because you've somehow limited what people can type in (through a form tag or other front-end interface), you can't guarantee that incoming data will conform to those specifications. Remember the command-line tests, where you could specify your own QUERY_STRING and other data? It's easy for someone to write a program that does the same kind of thing, except that, instead of executing a local script, it executes a remote one such as yours. This isn't a very common thing to encounter, but your script shouldn't rely on the "Well, that'll never happen" theory. If you do, Murphy's Law steps in and beats you about the head and shoulders when you least expect it. If data is supposed to be in a particular form, or of a certain size, make sure your program won't choke on things that don't meet its criteria.
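A cheap defense along those lines is to enforce the size limits on the server side, no matter what the form claimed to do. The 40- and 500-character limits come from Table 6.1; the field names are assumed:

```perl
# Cut incoming data down to the promised sizes, even if the form was bypassed
$input{'name'}     = substr($input{'name'}, 0, 40);
$input{'comments'} = substr($input{'comments'}, 0, 500);
```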

Besides allotting time for some Boundary Testing, you should take examination of the data to the next step: Input Verification. You want to ensure that once data gets to your application it's being interpreted correctly and not mangled by some other process. This can be done with something as simple as a feedback script, which just echoes out what was typed in before the script continues with the rest of its functions. Listing 6.5 shows an example of placing Input Verification at the beginning of an application.

Listing 6.5. An example feedback script, for examining received data.

require 'cgi-lib.pl';

#Grab the incoming data and place it in variables
&ReadParse(*input);

# Let's see what we've got..
print "Content-type: text/html \n\n";
print "Name received was: $input{'name'} <br>\n";
print "Serial number received was: $input{'serial'} <br>\n";
print "Comments received were: $input{'comments'} <br>\n";

#Do the rest of the program

Once you've verified that you're actually getting the data you think you're getting, it's time to see what the processes in your program are doing with it. Based on input that you can now confirm is correct, your program should be able to run through its processes correctly and generate the output you're expecting.

Running Through the Processes

As mentioned earlier, you could easily just bang on the program randomly and look to see what happens. This isn't going to get you very far very fast. What you need is an organized testing plan that covers not only every function, but every situation that could be encountered. As your application gets more complex, this becomes pretty involved.

A good testing plan is one that covers all the functions in the application one by one, as well as en masse. Just because the first subroutine works is no reason to celebrate. It's good, but the whole application has to work before you can put it up for general access.

The first step towards this is to review your code and see what major sections of functionality there are. If there's only one, you can just break that out into a list of specific functions. If there's more than one, each one of those parts should comprise a testing category, such as Receiving User Data, Checking Serial Number, Saving Data to Log File, Creating HTML Output, and so on…whatever components best describe sections of work that are done in your program.

Once you have these sections, review what each section needs in order to do its job. If you need a valid serial number before going through the portion of your code where it generates HTML output, any testing sequence that is just supposed to target the HTML generating portion will have to take that into account, through hard-coding or some other bypass method.

Is Automated Testing Right for You?

Automated testing is tough to set up, but it has the advantage that, once in place, it can make testing an application very easy. The simplest form of automated testing is a command-line script that reads test cases from a text file, sends that test data to the application, and records the output to a file to be examined later. More sophisticated options include custom-made programs that test application speed and compare results against expected output, recording only problems or the test data you asked for, and reducing the amount of time that anyone has to spend sorting through the results.
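As a sketch of that simplest form (the file names, the GET-only assumption, and the script name are all placeholders), a driver might read one QUERY_STRING per line and log each run:

```perl
# Minimal automated test driver: reads one QUERY_STRING per line
# of testcases.txt and records each run's output in results.out
open(CASES, "<testcases.txt") || die "Can't open test cases: $!\n";
open(RESULTS, ">results.out") || die "Can't open results file: $!\n";
$ENV{'REQUEST_METHOD'} = "GET";
while ($case = <CASES>) {
    chop($case);
    $ENV{'QUERY_STRING'} = $case;
    $output = `./yourscript.pl`;    # capture everything the script prints
    print RESULTS "--- Case: $case\n$output\n";
}
close(CASES);
close(RESULTS);
```

Comparing results.out against your baseline reference from the command-line phase then becomes a quick job.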

As a general rule, the more your application will be used, the more seriously you should consider automated testing. If it's for a commercial service, or for something that should be self-sufficient, that adds to the value of automated testing as well. Keep in mind, though, that at some point the automated testing effort becomes more work than creating the original program. That's a bit much, and it's up to you to determine whether what you're creating needs that much effort.

Whether you go for automated testing or not, you'll need to create test cases to check and make sure that your program behaves as expected under a fixed set of circumstances. You can then give this plan to other people and let them do some of the work for you.

Debugging the Application

Bugs happen, but you can squash them. If your testing plan has been thorough, you might uncover a whole slew of problems. Now the effort needs to be focused on figuring out why they're occurring, and trying to resolve them.

First, let's look at some of the most common errors. These are normally accompanied by Server Error codes, which usually don't tell you much of anything that's of use. However, as you become more comfortable in debugging applications, you'll learn that several of the error messages point to some frequently made mistakes.

Common Errors

The most common types of errors are ones that the server can help you resolve, though not willingly. Simple typos, file permissions, and other easy-to-make mistakes can cost you hours of debugging time if you're not familiar with what they could be pointing to. This section introduces you to the three most common errors the server sends back during the execution of CGI scripts, and explains some of their most frequent causes.

Error 404 - Not Found

The most obvious meaning of this message is that your script can't be found. Check the URL that points to the script and make sure that the file is indeed there. A common cause, when the file seems to be in the right place, is where the DocumentRoot of the server is set. This is the location that serves as the base directory from which all other directories are resolved. So, if your DocumentRoot is "/usr/stuff/httpds/", a URL ending in "/cgi-bin/myscript.pl" really points to "/usr/stuff/httpds/cgi-bin/myscript.pl". Is that where the file is?

Another very frequent cause of this error is when the server doesn't get any output, or gets corrupt output due to an error taking place. If you'd like to cause this error, you can do it pretty easily: leave off a trailing semicolon (;) in a Perl script line. The server just loves that. Normally, if you've checked your program out on the command line, or compiled it in C, you'll have encountered this error beforehand and have resolved it. If you made any changes to your script recently and this suddenly starts happening, you'll have a pretty good idea what the root of the problem is.

403: Forbidden

Remember the method discussed earlier, in the section entitled "Securing the Script," of keeping other people from getting at your CGI script? Well, it looks like your server thinks you're one of those people. This error is normally the result of one of two possible situations in which the script can't be executed.

The first is that the file permissions, as determined by the operating system, aren't set to allow access. This is more common on UNIX systems, where file permissions are a fact of daily life rather than an afterthought. What you'll want to check is that the script you're referencing can be executed by the server. How do you check that? On UNIX systems, the configuration files determine what user the server tries to run as. It could be root (the all-powerful system account, which is a very bad idea), your user account (not quite so powerful, but still not great), or nobody (a generic account that the system can use for processes; it's the best choice). With the configuration file set to the correct user (most likely the nobody account), check the directory that holds your CGI script.

The ideal situation is that the file will be owned by the nobody account, and executable only by the nobody account. If you use the ls -lg command on UNIX, you'll see who owns the file, as well as who can execute it. Without delving too far into the wonders of the UNIX world, here's how you can ensure that nobody owns the file:

  1. Switch the current directory to the one with the CGI script.
  2. Type "chown nobody script" (where script is the name of your script file).
  3. Type "chmod 700 script" (where script is again the name of your CGI program).

You are now all set. What you've done is change ownership of the file (chown) so that nobody owns it. chmod 700 really means "modify this file so that only the owner has read, write, and execute permissions."
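The steps above can be sketched as a shell session. A stand-in file in /tmp is used here, and the chown step is shown commented out because changing a file's owner to nobody requires root privileges:

```shell
#!/bin/sh
# A stand-in for your CGI script file.
touch /tmp/script.cgi

# chown nobody /tmp/script.cgi   # step 2 -- run this as root on the real file

chmod 700 /tmp/script.cgi        # step 3 -- owner may read, write, and execute

# The listing should show -rwx------ : the leading rwx belongs to the
# owner; the two blank triplets mean group and others get nothing.
ls -l /tmp/script.cgi
```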

There are times when the CGI script will have all the right permissions, but files it needs to use (especially output files) can't be accessed. Check to make sure that if you're creating, reading, or modifying files in any way that full access to both the files and their directories is available to 'nobody' or whatever user the server is running as.

The other possible situation that can cause the 403 Forbidden error is when the server's own built-in security has taken over, and doesn't think that you should be able to use that file. Most servers insist that a particular directory be the only one that people can execute scripts from, such as cgi-bin or scripts. If your CGI program is in that directory, then check any security-related functions that your server has, such as the ability to deny access to certain directories except for special users, or restricting access by IP addresses, or even needing an explicit list of what files can be accessed. How and where you modify those elements is up to your server, but it shouldn't be too hard to track down.

500 Server Error

The server is having problems doing something, but what? Don't worry: there aren't too many things that could be going wrong, though you'd think the error message would try to be more specific if there were. The worst possible case is that something happened to interrupt communications between the server and the CGI process, such as someone abruptly terminating the script. More commonly, however, the CGI script has failed to provide the server with instructions for how to deal with its output: it hasn't provided a Content-type: header.

Some servers and browsers will assume a Content-type: header of text/html if there's another header element, such as Window-target: (the header for dealing with frames in Netscape). Don't assume, though; it's always better to make the declaration explicit.
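A minimal CGI script, sketched here as a shell script with a hypothetical /tmp path, shows the two things the server must see before any output: the Content-type: line and the completely blank line that ends the header block. Leave out that blank line and you get exactly the 500 error described above:

```shell
#!/bin/sh
# Write out a minimal CGI script; /tmp/hello.cgi is a stand-in path.
cat > /tmp/hello.cgi <<'EOF'
#!/bin/sh
# The header block: at minimum, one Content-type line...
echo "Content-type: text/html"
# ...followed by one completely blank line that ends the headers.
echo
# Everything after the blank line is the document body.
echo "<html><body><h1>It worked</h1></body></html>"
EOF
chmod +x /tmp/hello.cgi

# Run it by hand, exactly as the server would:
/tmp/hello.cgi
```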

Make Use of Error Logs

Error logs, and even general server logs, are great sources of debugging information. On most NCSA HTTPd servers you'll find an error_log file, normally located in the logs subdirectory of the directory that contains your main server process. Any real error message the server generates will be recorded there for your review, which also helps if you're not the one doing the testing. The more things that are written down to check through later, the better.

When looking through the error logs, be sure to narrow down which revision of the script produced which error. If you changed the section of the code that generates HTML, and the server started registering more errors after that point, you know right where to go. Because error log entries record the time as well as the origin, you can approach this in one of two ways:

  1. Try different revisions of the code from different machines. This will give you a different source IP number to look for.
  2. Write down the time that you modified the code and what was changed. This is good general coding practice, and normally done in internal revision comments. But because you normally don't include the exact time of the change in the revision notes, you might get sidetracked if you start making lots of changes.
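Either way, a couple of shell one-liners do most of the digging. This sketch builds a small stand-in error_log (with made-up timestamps and IP numbers) so the commands can be shown working; on a real NCSA-style server you'd point them at the logs/error_log file instead:

```shell
#!/bin/sh
# A stand-in log file with invented entries, for illustration only.
cat > /tmp/error_log <<'EOF'
[Tue May 14 10:02:11 1996] access to /cgi-bin/old.cgi failed for 192.168.1.5
[Tue May 14 10:15:40 1996] malformed header from script /cgi-bin/form.cgi
[Tue May 14 10:16:02 1996] access to /cgi-bin/form.cgi failed for 192.168.1.7
EOF

# Watch the most recent entries as you re-test a new revision:
tail -2 /tmp/error_log

# Or isolate the entries from the machine you ran a given revision from:
grep '192.168.1.7' /tmp/error_log
```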

Debugging Flags

A debugging flag can come in several forms. One form is a check that forces the program down a particular path if it's in debugging mode. This is much like hardcoding data in the application-it allows you to ignore possible variations in certain sections by skipping them entirely or by feeding them data that you can be sure is expected and needed.

The other kind of debugging flag is nothing more than a print statement that happens to announce when something's happening, or to show what the value of some element of data is. If you're stuck, and can't figure out where the problem is occurring, this is an easy way to check.
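Both kinds of flag fit in a few lines. This sketch, again as a shell CGI, uses a hypothetical DEBUG environment variable; note that the diagnostic messages go to standard error, so they can never corrupt the headers and body the server reads from standard output:

```shell
#!/bin/sh
# DEBUG is a hypothetical flag: set DEBUG=1 in the environment to enable it.
DEBUG="${DEBUG:-0}"

# Print diagnostics to stderr so they never mix with the CGI output stream.
debug() {
    if [ "$DEBUG" = "1" ]; then
        echo "DEBUG: $*" >&2
    fi
}

# Hardcoded stand-in input, so the debug path can be exercised by hand.
QUERY_STRING="${QUERY_STRING:-name=test}"
debug "query string is '$QUERY_STRING'"

echo "Content-type: text/html"
echo
echo "<html><body>Processed: $QUERY_STRING</body></html>"
```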

Certain special cases exist when dealing with functions such as Internet Server API (ISAPI) DLL functions. More information on debugging ISAPI DLLs is given in Chapter 25, "ISAPI."

Re-Testing Your Application

What do you do once you've fixed all the problems? Go through it all over again. You can never be sure that fixing one problem didn't cause two more, especially if they're all tied together. This is when the advantage of a testing plan comes into full effect.

The first time through you looked for cases where things didn't work, and saw what the output was. Does it look different now that you run through it again? If you're using automated testing scripts, compare the output of one testing round to another. Are there inconsistencies that could indicate a deeper problem?
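Comparing two rounds of captured output doesn't need anything fancier than diff. This sketch fabricates two stand-in capture files to show the idea; in practice they'd be the saved output of your testing script from each round:

```shell
#!/bin/sh
# Stand-in output captures from two testing rounds (invented content).
cat > /tmp/run1.out <<'EOF'
Content-type: text/html

<html><body>Total: 42</body></html>
EOF
cat > /tmp/run2.out <<'EOF'
Content-type: text/html

<html><body>Total: 41</body></html>
EOF

# diff exits 0 when the rounds match; any output is a change to explain.
if diff /tmp/run1.out /tmp/run2.out > /tmp/run.diff; then
    echo "rounds match"
else
    echo "output changed between rounds:"
    cat /tmp/run.diff
fi
```

Every line in the diff is either a fix you made on purpose or an inconsistency worth chasing down.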

Problems seem to come in layers. By peeling back what seems to be the problem, you might be able to fix a symptom, while the root of the problem remains. You have to dig deep enough until you're sure that what you've produced is as stable as it can be.


Testing isn't easy, and it takes a lot of concerted effort and planning to be sure that what you've developed meets your needs and those of the people who will be using it. Take your time, wherever possible. Plan out what the script needs to do, and how it will meet those needs, before leaving the planning phase and entering the coding phase. Once you're in the coding phase, look through the code very carefully to identify what areas may be a concern, and pay special attention to them without skimping on other portions. Don't rush through any one phase, or give any one section of your code less attention because it doesn't seem like it should have any problems. Even one tiny typo can ruin your whole day.

Remember, debugging methods and special tools exist for almost every language and situation. If you take it slow and make use of all the things that are available to you during testing, you'll rarely spend time going back to fix problems, and you can concentrate on building the next cool program while people enjoy your other ones.