Code Tips, Tricks and Advice

Last Modified: January 31, 2001
Ken Mayer, Senior SQA Engineer
dBASE, Inc.
With a LOT of help from Gary White

Example files available in TIPS.ZIP

NOTE: This document was originally written for Visual dBASE 7.0/7.01; it has been updated for dB2K (release 1) to include information about new properties, events, etc., and any new controls added since the document was first written. If a control is new, it will be noted. In updating this document, the images were left "as is" unless it was felt absolutely necessary to change them ...

In addition, this document refers to dB2K a lot, but unless it is about a dB2K specific aspect, the text can be used for Visual dBASE 7.0 through Visual dBASE 7.5.

The Purpose of this Document

This document covers a wide range of topics. No individual topic is really enough for a whole "HOW TO" document on its own, but each is one that people discover, or re-discover, on a fairly regular basis in the dBASE Newsgroups. This is an attempt to cover many of the more commonly asked questions from the newsgroups in one place.

NOTE that there are several forms that come in the TIPS.ZIP file for this HOW TO (see above) -- you should read this document and then examine the forms -- they may not make a lot of sense until you do ... the forms are referenced in the text below.

One last note: This document was written very ad hoc -- i.e., as ideas came up, as specific questions were asked in the newsgroups, and so on. Much of it assumes that you have some knowledge of programming techniques, of the form and/or report designers, and a lot more. Keep in mind that the techniques shown here can often be used in .PRG code, forms, reports, datamodules, and so on. One of the best aspects of dBASE, and I believe one of the reasons the software is still alive today, is that it is very flexible. If you run into a problem trying to do a specific task in a specific manner, the chances are very good that there is at least one other way to do whatever you are trying to do.

There is no way I can possibly anticipate every situation in which someone might need to do some task -- your best bet is to check the newsgroups if you can't find the information you need in the HOW TO documents ...

Topics Covered In This HOW TO Document


Most users of dB2K are a bit daunted by the fact that, in addition to a lot of new objects, they are going to have to learn to use SQL.

The first thing you should know -- most developers using dB2K know very little SQL. The one statement that everyone needs to know is:

   SELECT * FROM tablename

And of course, there are a lot of variations on this.

The reason that you really don't need to know much, if anything, more than that, is that with the data objects in dB2K, you can access practically any feature you might need without having to know any SQL.

That said, there are of course a few things that you may wish to know about SQL, and I have attempted to touch on these in the notes below.

You can narrow down your field list, if you really have to, by using a list of fields where the star (asterisk -- *) appears in the sample statement above; but for the most part, unless your tables have a huge number of fields, this is not going to make much difference in load times.
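For example, using the Fish table from the samples (the field names here match that table), a narrowed field list might look like:

   SELECT name, species FROM fish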

You can filter the data using the WHERE clause. Note that the comparison values in the WHERE clause are embedded in the SELECT statement, which is handled as a string, even if you are filtering against date fields or numerics. We'll be looking at filters in more detail in another part of this document, and specifically the OODML methods of filtering the data, but we will also discuss the WHERE clause.
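A quick sketch, again using the samples' Fish table (note the quoting convention for field names that contain spaces):

   SELECT * FROM fish WHERE fish.'length cm' > 90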

You can order the data on any fields. While this has the advantage that you don't need a bunch of index tags, it has the disadvantage that, with local tables, you will end up with a read-only query.
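A simple example of ordering, using the samples' Fish table:

   SELECT * FROM fish ORDER BY name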

You can perform SQL joins, but again on local tables, you will end up with a read-only query.
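A sketch of a simple join, using the customer and invoice tables from the MUGS sample (the exact field list is an assumption -- check your own tables):

   SELECT c.'last name', i.'invoice date'
     FROM customer c, invoice i
    WHERE c.'customer id' = i.'customer id'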

Why do these read-only queries happen? This is due largely to the implementation of SQL in the Borland Database Engine (BDE). If you work with a "SQL Server Database" (Interbase, Oracle, etc.) then you will not end up with these read-only queries as the BDE "passes" the full SQL statement to the server rather than attempting to execute it locally. With local tables, the database server is the BDE, and so it has to deal with these issues. For whatever reason, it creates read-only queries in these cases. Maybe future revisions of the BDE will add more functionality for local tables.

Can you use the SQL Designer? Well, yes, but ... there are a few problems with this tool. The first and most important is that it was actually written for other languages and imported into dBASE. The problem this creates is that in most programming languages, the semi-colon ( ; ) character is an end of statement character. In dBASE, it is a statement continuation character. The SQL designer usually creates some VERY long SQL statements and stores them in the .SQL file. These get wrapped in the SQL file. BUT, they do not have the semi-colon at the end of each line of the statement. If you try to use this in a query in dB2K, you will get errors.

Another problem is that the SQL designer does things the hard way -- it creates a complete field list, where you might be better off using the wildcard "*" ... and there are probably other situations that it's not good to use it for.

Generally I find it easier to use the SQL Designer to figure out how to do something, and then modify the streamed-out code for my own use, than either to use the actual .SQL file in anything, or to write my own SQL statements from scratch with a good book handy and see if they work. (Example: I discovered, while working on another HOW TO document, that I could use a FULL OUTER JOIN to combine some tables for a report ... but I didn't use the code exactly as it streamed out of the designer ...)

If you have books on SQL, you will find that SQL is a pretty rich database manipulation language. You will also find yourself coming up short when trying to use it fully in dB2K (or any other product that uses the BDE ...) on local tables. Frankly, for the most part, you don't need to know very much SQL in dB2K -- the data objects will handle nearly anything with the properties, events and methods that are programmed into them.

Back to the menu

One item you may find particularly useful, especially since the OODML does not have a fast method of doing this, is the ability to replace the contents of a field for all rows rapidly.

It is possible with the OODML to loop through all rows, but in the XBase DML, the REPLACE ALL command was used to do this in a single command. Using a for/next loop or a do/while loop to go through each row in the rowset is a slow process for a large table.
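For comparison, a row-by-row OODML loop (using the same illustrative table and field names as the UPDATE examples in this section) might look like this -- note that each next() forces an implicit save of the modified row, which is part of why this approach is slow on a large table:

   q = new query()
   q.sql    := "select * from MyTable"
   q.active := true
   q.rowset.first()
   do while not q.rowset.endOfSet
      q.rowset.fields["myLogicalField"].value := false
      q.rowset.next()   // navigation saves the modified row
   enddo
   q.active := false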

Instead, you should consider the local SQL command UPDATE. This command is the SQL equivalent of the REPLACE ALL command, and is generally pretty fast.

The syntax is:

   UPDATE tablename ;
      SET fieldname = somevalue, fieldname2 = someothervalue, ... ;
      WHERE somecondition

The "WHERE" clause is optional (just like the REPLACE statement in the XBase DML has an optional FOR clause).

An example might be a need to replace a logical field for all rows in a rowset with "false":

   UPDATE MyTable SET myLogicalField = false

If you needed to have a condition, let's say a date field based on some value:

   UPDATE MyTable SET myLogicalField = false;
          WHERE myDateField <= date()

Back to the menu


Ok, this topic is covered pretty thoroughly in Alan Katz' OODML.HOW. Why am I bringing it up here? Alan's discussion is sort of high-level -- he discusses concepts very well, and does provide examples, but there are things he doesn't cover, and I will attempt to discuss the ones I can think of here.

There is another HOW TO document in the Knowledgebase that discusses comparisons of the XBase DML to the OODML -- it is titled "XBase to OODML". If you are familiar with older versions of dBASE or other XBase languages (FoxPro, Clipper, whatever), you may wish to read this document ...

While the OODML can handle practically everything that the old XBase DML can, there are a few shortcomings, and I felt it a good idea to give you some ideas on how to get around those here.

Set Skip
As has been pointed out in the dBASE newsgroups, the XBase DML command SET SKIP is not completely covered in the OODML in dB2K. If you use the masterRowset/masterFields (for local tables -- for SQL Server tables the equivalent is the query's masterSource property), you will get the "standard" skip -- as you navigate through the parent table, the child table(s) will keep up.

However, some developers need to be able to do other things that SET SKIP allows (like navigating through the child table, and having the parent keep up, and/or being able to see records that have no matching parent or child ...) The OODML does not currently have any way to do these things. So, in some cases, you may be stuck using the XBase DML if you have a need to use this capability.

One solution to "skip" in the child rowset and have the parent keep up is to use the following code -- it assumes a "next" button on the form, and also assumes you have your masterRowset/masterFields (or masterSource) set properly:

   function nextButton_onClick
      // "parentquery" is illustrative -- use the name of the master
      // query on your own form
      if form.childquery.rowset.atLast() or ;
         form.childquery.rowset.endOfSet
         // at the end of the child rows -- move the parent row
         form.parentquery.rowset.next()
      else
         form.childquery.rowset.next()
      endif

This would solve the problem explicitly mentioned above (code by Gary White).

Another solution is given in the online help for Visual dBASE:

Override the next() method of the detail table. For example:

   function next( nArg )
      if argcount() < 1
         nArg := 1                       // Skip one row forward by default
      endif
      if not rowset::next( nArg )        // Navigate as far as specified, but
                                         //   if end of detail rowset
         this.masterRowset.next( sign( nArg ) )  // Move forward or backward in master
         if nArg < 0                     // If navigating backwards
            this.last()                  // Go to last matching detail row
         endif
      endif
Then navigate by calling next() in the detail rowset not the master rowset, as you would with SET SKIP.

New in dB2K: The rowset object now has a property masterChild which defaults to "constrained" -- it has two states, "constrained" or "unconstrained". If it is "constrained", navigating in the child rowset will constrain the user to rows that match the parent. If it is "unconstrained", navigation in the child rowset will ignore the parent/child relationship, and the user can move around all over the place.
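Toggling this property is a one-liner; for example (assuming a child query named childquery, as in the earlier example):

   form.childquery.rowset.masterChild := "unconstrained"
   // child navigation now ignores the parent/child link;
   // set it back to "constrained" to restore the default behavior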

Back to the menu

This is an often touted feature that really confuses people.

What exactly is "lookupSQL" and why should you even care? How many times have you tried to use proper "normalization" for your data, to the point that you have a lookup table with a code field and some values associated with the code in other fields, and had to write the program code to get your primary table to interact with the lookup table?

Isn't that a lot of work? Loading arrays to display data in comboboxes, scanning through the data to find the proper code and storing that value in the primary table ... all of that can be a lot of code.

Not any more! Enter a new feature and a really spiffy one (even if it's not as complete as some of us would like) -- the lookupSQL feature of the field object is designed to do the work for you.

In the samples directories that ship with Visual dBASE are some tables that use these features. In the Mugs sample, the customer table has a field called STATE ID, which is a two character field. This stores, as you might imagine, the standard US two character state codes (CA = California, etc.).

For your users, however, you may wish to display the full state name. To do this using the OODML of Visual dBASE 7, what you need to do is drill down to the STATE ID field of the table, and find the lookupSQL property (the steps are below) and enter a simple SQL SELECT statement to select the fields you need in that table.

For this to work properly, you must select the code field in the lookup table as the first field (the type must match that of the field you will store it into), and then the field you want to display. If your lookup table is designed that way in the first place, you could simply use:

   select * from lookuptable
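If you prefer to set this in code rather than in the designer, the lookupSQL is just a string property of the field object; a sketch using the STATE table from the MUGS sample:

   q.rowset.fields["State ID"].lookupSQL := "select * from state"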

To see this in action, if you want to follow along, try the following. (This assumes that you have not modified the BDE Aliases that are set up for you when you installed dB2K.)

If everything worked properly, you have just set up the lookup. To see this, drag several fields from the customer table over to the design surface of the form. Make sure you grab the state id field. When you drag the "STATE ID" field to the design surface -- notice that you get a COMBOBOX! This is automatic -- you do not have to do anything to make this happen.

WARNING: Do NOT set the dataSource for a combobox that uses the lookupSQL -- this is generated automatically for you by the lookupSQL property (and the underlying code for the lookupSQL), and you may have unpredictable results if you attempt it ...

If you run the form now, and scroll through the data, you will see the value in the combobox automatically update. This is done because of the lookupSQL property that was set for the STATE ID field.

There is one interesting side-effect. If you need to query the value of the STATE ID combobox (the one that appears on screen) or the field itself (form.rowset.fields["STATE ID"].value), you will get the value that appears in the combobox, NOT the value that is stored in the table!

What if you need to extract the value that is in the actual "STATE ID" field? Luckily, there is a way to get to it. The lookupSQL property automatically generates a lookupRowset for that field. The syntax to get to the real value of the field may appear strange, but it works:

   form.rowset.fields["STATE ID"].lookupRowset.fields[1].value

If you want to see what's really stored in the state id field while the form is running, this is how you would do it. (If you think about it, this makes sense -- the lookupRowset must have two fields (minimum), and field 1 must be the value in the table you are performing the lookup from and field 2 is the value you wish to display to your user.)

If your lookup table had more than two fields, and you wanted to look at an associated field that wasn't the one stored in the field, and wasn't the one displayed in the combobox, you can get to that by changing the '1' in the above statement to the appropriate field number, OR you can substitute the field name in quotes.
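In other words, if the lookup table had a third field named "Region" (a hypothetical field, purely for illustration), either of these would reach it:

   form.rowset.fields["STATE ID"].lookupRowset.fields[3].value
   form.rowset.fields["STATE ID"].lookupRowset.fields["Region"].value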

One other aspect of all this -- while I generally recommend that you don't bother using the "ORDER BY" clause in your SQL select statements (because they create read-only queries), your lookup table will be read-only by definition, and you may wish to add an "ORDER BY" to the field being displayed, so that the data will be in a sequence that is easy to determine. If you had a much larger lookup table, it could be very confusing trying to find the item you were looking for -- using the example above, you might set the lookupSQL property to:

    select * from state order by state

Unfortunately, there is one place where the lookupSQL falls on its face -- that is the ability to filter out the data displayed on a form using the lookupSQL. You can add a WHERE clause (standard SQL), and the filter will be respected. This is great, but ... if you need to dynamically change the filter, you are going to find that this does not work -- once the lookupSQL property has been evaluated, that's it. It is possible to get around this by reasserting the field's lookupSQL property and the dataLink for the field (not the dataSource). Note that leaving one or the other out will not work:

      form.rowset.fields["State ID"].lookupSQL = form.rowset.fields["State ID"].lookupSQL
      form.COMBOBOXSTATE1.dataLink = form.COMBOBOXSTATE1.dataLink

Gary White has provided a handy little form to see this in action, which uses the CUSTOMER and STATE tables in the MUGS sample. If you add a state code the new state should appear in the combobox when you look. (The form is called LKUP.WFM)

In addition to all of the above, you may wish to examine the last part of the filter section of this document, as it examines using the WHERE clause of the SQL Select to filter data ...

Gary created another form (which is in this .zip) that is based on LKUP.WFM, but allows you to filter the lookupSQL, using a WHERE clause, which gets re-evaluated based on other changes. This can be very useful as well. The form is: LKUPFLTR.WFM. You could expand this to use two comboboxes, for example, where the values selected in the first combobox were used to filter the values in the lookupSQL in the second combobox. (For what it's worth, I've done this, it works great [KM] ...)

In doing more testing, Gary did find that you can actually display, on a form in an entryfield, a field in the lookup table used by the lookupSQL. If you are not working with a unique value in the second column, then you will not always find the match you were looking for. When you display a unique value, this problem goes away. (The example uses last names, which in many cases will not be unique.)

To see this in action, take a look at the form: INVOICE.WFM. (The lookupSQL is the enabled combobox, the datalinked entryfields in that part of the screen are from the lookupRowset!)

There is also a bug in the designer that prevents you from datalinking entryfields to fields in a lookupRowset. If you enter a reference for the dataLink, such as:

      form.invoice1.rowset.fields["Customer ID"].lookupRowset.fields["LastName"]
The form designer will truncate this to:


LookupSQL in Reports
You can set up a lookupSQL statement for a field that is used in a report. However, note the following:

NOTE: While this document doesn't explicitly discuss concepts covered better elsewhere (such as Alan Katz' OODML.HOW), you may wish to consider using a DataModule if you are using lookupSQL -- it means that you only have to define it once, and can use it in multiple forms.

Going along with this idea, you can also set your lookupSQL statement as a custom property of a field in the table designer.

Back to the menu

A rather interesting and useful feature that a lot of folk don't understand well is the updateSet class.

This particular object allows you to copy tables, append from tables, update from tables, a combination of append and update, and handle errors generated if the table you are updating has a primary key.

The biggest problems I've seen on the newsgroups deal with simply using this object, but that may be because most people don't understand it. So, before we get into details on using it, let's examine it. The big thing to realize here is that the updateSet object is used to work with two tables -- you must name those.

The updateSet class has the following properties and methods:

Is your head spinning yet? Are you seeing some possibilities here?

As noted above, you must always have a source and a destination. The changedTableName, keyViolationTableName and problemTableName properties are optional, although very useful in some circumstances. The indexName is needed for most of the methods, but not all.

Copying a Table or Rowset
Let's take a look at some examples. The first is copying one rowset or table to another. To do this, we must have a table or rowset that is the source -- i.e., what we are copying, and we must name a table or rowset as the destination, or the new table.

At its simplest, we can do:

   u = new UpdateSet()
   u.source      := "FISH"
   u.destination := "COPYFISH"
   u.copy()

This will copy the rows in the "FISH" table in the samples directory to "COPYFISH", creating a new table.

Not very exciting. We could do this as well with the XBase DML:

   copy table FISH to COPYFISH
So, what's the big deal? Well ... among other things, we can create tables of different types. If you were using a local table, and wanted to copy that table to a BDE Aliased SQL server, you could:

   u = new UpdateSet()
   u.source      := "FISH"
   u.destination := ":myalias:Fish"

This should create a new table in "myalias" (as it is defined in the BDE), and convert the table to that appropriate type. This is one way to get your table into an Interbase database, for example.

However, another, and potentially even more useful thing to note is that we can create a rowset from a query control, and copy just the rows that match whatever conditions we need. The example shown below uses the customer table in the MUGS sample, and assumes that you have the MUGS BDE Alias set in the BDE (which was installed there, but you may need to re-create it). It will copy only those rows that have the "STATE ID" containing a value of "CA" (California).

   d = new database()
   d.databaseName     := "MUGS"
   d.active           := true
   q = new query()
   q.database         := d // assign mugs database ...
   q.sql              := "select * from CUSTOMER"
   q.active           := true
   q.rowset.filter    := [:state id: = 'CA']
   q.rowset.indexName := "Last Name"

   u = new UpdateSet()
   u.source           := q.rowset
   u.destination      := "CACUST"
   u.copy()

   // cleanup
   q.active           := false
   d.active           := false

In this example, the new table will be created "locally" -- i.e., wherever you were currently working, as opposed to in the MUGS database. If you wanted to copy this to the MUGS database, you would have set the destination as ":MUGS:CACUST.DBF".

NOTE: You can use a query object's rowset for either or both of the source and destination rowsets. I.e.,

      q1 = new query()
      q1.sql = "select * from fish"
      q1.active = true

      q2 = new query()
      q2.sql = "select * from copyfish"
      q2.active = true

      u = new updateSet()
      u.source := q1.rowset
      u.destination := q2.rowset

Note that in the case just mentioned you must give a table extension if the table is a local (.DBF or .DB) table type. (Otherwise an error will occur.) It is probably a good idea to always set the extension for your destination table if you are using local tables.

In the case of the copy method, you cannot create a rowset that does not exist, i.e.:

   q2 = new query()
   q2.sql = "select * from sometable" 

   [...skipped code...]
   u.destination := q2.rowset

Since this rowset does not exist, you will get an invalid reference error.

NOTE: Using the copy method will NOT copy the index tags of the original table. This is a serious drawback, and while I believe it is working "as designed", is a bit annoying.
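If you need the tags on the new table, you will have to recreate them yourself after the copy() call; a minimal sketch using the XBase DML (the tag name here is an assumption):

   u.copy()
   // recreate the tag on the new table by hand:
   use COPYFISH exclusive
   index on name tag name
   use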

Using Copy() with Calculated Fields
A user in the dBASE newsgroups mentioned this recently -- there are two ways to get calculated fields in your queries in dB2K, unfortunately, the UpdateSet object only recognizes one of them. Note the following:

   q = new query()
   q.sql = "select * from fish"
   q.active = true

   // add calculated field:
   f = new field()
   f.fieldName = "doubleLen"
   f.beforeGetValue = {|| this.parent["length cm"].value * 2 }
   q.rowset.add( f )

   // the calculated field shown above does NOT get
   // copied.
   // do updateset:
   u = new updateSet()
   u.source = q.rowset
   u.destination = "test"
   u.copy()

However, if you use the ability to create a calculated field in the SQL Select statement (more details can be found in the local SQL online help reference that ships with dBASE, and in XDML to SQL in the Knowledgebase), you can get that new field copied. The syntax is a bit more complex, but it does work. Note that in the following we are assuming all fields (and the calculated field) -- if you wanted to limit the number of fields copied you could do that here as well:

   q = new query()
   q.sql = "select id, name, species, fish.'length cm', fish.'description'," + ;
           "fish.'ole graphic', fish.'length cm'*2 as doubleLen from fish"
   q.active = true

   // the calculated field shown above DOES get
   // copied:
   // do updateset:
   u = new updateSet()
   u.source = q.rowset
   u.destination = "test"
   u.copy()

Appending Rows From Another Table
The append method is useful because it allows you to add rows with matching structures from one table to another. The syntax is similar to the above, with the added ability to use a key violation table -- if your table has a primary key, and a row in the table you are adding from (the source) has the same key value as a row in the destination table, you can have the updateSet copy that row from the source to a new table named in the keyViolationTableName property.

NOTE: The append and appendUpdate methods assume that the structures of both tables are the same. If they are not, you will not receive an error, but you may receive some interesting results ...

NOTE: A Key Violation will not occur if the primary key is an autoincrement field. Instead, the autoincrement will do what it is supposed to, and new rows will be appended ...

The sample below is based on the FISH table in the sample tables, rather than the customer table -- see the note above about key violations, since part of the reason I am using it is to show you that ability ...

The Fish table has the name field set as the primary key. For the example I am using, I have made a copy of the fish table locally, deleted some rows, and added a couple. In addition, I have also copied the FISH table itself locally for testing. The first example does not set a keyViolationTableName, the second does.

   d = new Database()
   d.databaseName     := "VDBSAMPLE"
   d.active           := true

   q = new Query()
   q.database         := d // Assign database ...
   q.sql              := "select * from FISH"
   q.active           := true

   u = new UpdateSet()
   u.source           := "FISHCOPY"
   u.destination      := q.rowset
   u.indexName        := "PrimaryName"
   u.append()


When this is completed, the rows that might cause a keyViolation simply do not go anywhere -- they are not added to the Fish table as they would have caused a violation. No error will occur, which may be a problem for your users.

If, however, the same code is executed with the addition of a keyViolationTableName property which points to a table (it does not have to exist -- it will be created if necessary), it should contain any rows that cause a violation.

   q = new Query()
   q.sql              := "select * from FISH"
   q.active           := true

   u = new UpdateSet()
   u.source           := "FISHCOPY"
   u.destination      := q.rowset
   u.indexName        := "PrimaryName"
   u.keyViolationTableName := "COPYERROR"
   u.append()


NOTE: It appears after some testing that the keyViolationTable does work when using local tables if a BDE alias is used, but you have to do it differently. The following code has been shown to work:

      d = new Database()
      d.databaseName := "VDBSAMPLE"
      d.active       := true

      u = new updateSet()
      u.destination := ":VDBSAMPLE:FISH.DBF"
      u.source      := "FISHCOPY.DBF"
      u.indexName   := "PrimaryName"
      u.keyViolationTableName := "COPYERROR.DBF"
      u.append()
      d.active := false
      release object u

In this example, "FISHCOPY" is a local table, and the keyViolationTable will be created in the alias! What is happening is that the keyViolationTable is created in the same folder (directory) as the destination table. This is probably working as designed.

Updating Rows From Another Rowset
The update method allows you to update a table from another table with the same structure. What this does is look for a match between the two tables, and if a row matches (this requires that either a primary key be used, or that the field(s) used in the index expression not be changed) but has been modified in the second table, changes will be made in the main table.

This has the added capability of copying any rows that have been changed to a "changedTable" before the update actually occurs. This may be very useful in the case of errors.

For the following example, I will use the fish table as used in the last append examples, and which assumes that I have changed the data in a couple of records.

   q = new query()
   q.sql              := "select * from FISH"
   q.active           := true

   u = new UpdateSet()
   u.source           := "FISHCOPY"
   u.destination      := q.rowset
   u.indexName        := "PrimaryName"
   u.update()


NOTE that it is necessary to use the updateSet's indexName property for this to work properly, and the indexName must exist in the destination table/rowset.

The changes get made to the rows that match, but since I did not use the changedTableName property, the data was not saved before the update was made. In addition, any new rows are not copied over (see below).

   q = new query()
   q.sql              := "select * from FISH"
   q.active           := true

   u = new UpdateSet()
   u.source           := "FISHCOPY"
   u.destination      := q.rowset
   u.changedTableName := "OLDROWS"
   u.indexName        := "PrimaryName"
   u.update()


Interestingly, it appears that any row in the destination table (the one we are updating) that matches any row in the source table (the one we update from) will be copied to the OLDROWS table, even if no update actually occurs.

Updating Rows and Adding NEW Rows
The big deal here is that if you use the update method, any new rows in the source table are not added to the destination table. You can use the appendUpdate method, and this will work exactly like update, with the added feature that any new rows will be appended. While we could add a keyViolation table to this, it would be pointless, as any primary key matches should be caught automatically in the update process. Only rows that do not match the index expression will be appended to the destination rowset.

   q = new query()
   q.sql              := "select * from FISH"
   q.active           := true

   u = new UpdateSet()
   u.source           := "FISHCOPY"
   u.destination      := q.rowset
   u.indexName        := "PrimaryName"
   u.changedTableName := "OLDROWS"
   u.appendUpdate()


Deleting Rows
Finally, you can take any table that has the same structure as the main table and delete any rows that match. The match is based on the expression used for the index.

   q = new query()
   q.sql              := "select * from FISH"
   q.active           := true

   u = new UpdateSet()
   u.source           := "FISHCOPY"
   u.destination      := q.rowset
   u.indexName        := "PrimaryName"
   u.delete()


Any rows that match in the source will be deleted in the destination. (There is no method to copy out the actually deleted rows to another table, however ... if you need to preserve a copy, you may wish to create a routine yourself to find matches between the two and copy the matching rows to another table before you run an updateSet delete ...)

Back to the menu

Finding Data

There are usually more ways to find data in a table than you can imagine. The problem is really in determining which is best for your particular circumstance.

The following includes a few methods of searching data in a table to find the row you need. If you need to filter the data so that all rows that match a specific condition are displayed, you should examine the section that discusses Filters in this document.

Back to the menu

The fastest way to find data that matches specific criteria is with the findKey() method of the rowset.

This is equivalent to the SEEK command and the SEEK() function in the XBase DML. As a matter of fact, it is exactly identical. To use it, the rowset must have an indexName property set, and the item you are searching for must be contained in the index expression.

This is what makes it so fast. To use it, as noted, you need to have an indexName property set, and then you can use findKey to attempt to find the value. For example, if your user enters the last name you could use findKey to see if that last name exists in the table:

   cName = trim( upper( form.entryfield1.value ) )
   if form.rowset.findKey( cName )
      // do something
   else
      msgbox( "Could not find name " + cName )
   endif

Warning: FindKey() is a navigation command and it will trigger the canNavigate and onNavigate events of the rowset, and will cause an implicit save. If it fails, you are left on the same record you started on, but any changes made to the record will have been saved.

If you are attempting to look for duplicate keys you would want to use a second query that was used only for this purpose -- it would avoid the navigation issue in the primary instance of the query, and allow you to do the search ... See the "KeyMatch" discussion later in this document ...
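A sketch of that idea (hypothetical table and index names -- a throwaway query is pointed at the same table, so the form's own row pointer never moves and no navigation events fire on the form's rowset):

   qCheck = new Query()
   qCheck.sql    := "select * from CUSTOMER"
   qCheck.active := true
   qCheck.rowset.indexName := "PrimaryName"
   if qCheck.rowset.findKey( cName )
      msgbox( "That key already exists!" )
   endif
   qCheck.active := false
   release object qCheck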

An extremely useful feature here is that in addition to findKey, there is a findKeyNearest method. FindKeyNearest() works exactly like a "seek" when "SET NEAR" is set to "ON". You could use this instead of findKey to determine if there was anything close to what you were looking for. For example, you could, instead of using the code above, use findKeyNearest, and just move the row pointer to the record that most closely matches what the user entered:

   cName = trim( upper( form.entryfield1.value ) )
   form.rowset.findKeyNearest( cName )

If you have a grid displaying the table, an exact match will be displayed if one exists; otherwise the closest match will be found and displayed.

This particular method may have problems unless you can guarantee that you are working with a unique value in the field you are searching on.

Back to the menu

What if you need to search for a value that is not in the current index? Well, you could change the index, but that may not be what you need.

There are two methods that can be used to do a "locate".

The applyLocate method of the rowset can be used by itself, in code similar to what was shown in the findKey section:

   cName = trim( upper( form.entryfield1.value ) )
   if form.rowset.applyLocate( [:Last Name: = '] + cName + ['] )
      // do something
   else
      msgbox( "Could not find name " + cName )
   endif

The difference is that you must specify the whole condition (fieldname and so on) in the method call if used this way.

You can also do what is called a "Locate By Form" (sort of like Filter-By-Form as discussed later in this document). This is done by using a combination of beginLocate() and applyLocate().


   form.rowset.beginLocate()

This method will clear out all datalinked controls on the form, allowing the user to enter the criteria to use for the "locate".


   form.rowset.applyLocate()

This will cause the actual "locate" to be fired, and an attempt will be made to find a match.

There is a property of the rowset that affects how the locate will function, whether you use the locate-by-form or applyLocate() by itself -- locateOptions. This property can have one of the following values:

   0 -- Match length and case // this is the default!
   1 -- Match partial length
   2 -- Ignore case
   3 -- Match partial length and ignore case

There is a drawback -- you cannot perform locates on anything other than the beginning of the field (the same is true for filters). The next part of this topic covers a way to work around this oversight on the part of the developers.

The default of zero means that you are looking for an exact match.
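For example, to let the user type just the first few characters of a name, in any case, you might set locateOptions before doing the locate (a sketch, using the hypothetical "Last Name" field from above):

   form.rowset.locateOptions := 3  // match partial length and ignore case
   if not form.rowset.applyLocate( [:Last Name: = '] + cName + ['] )
      msgbox( "No match" )
   endif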

Back to the menu

String Contained IN the Field
The locate and findKey abilities of the OODML are great, to an extent. They fall short when you need to search for a string contained anywhere in a field other than at the beginning of the field.

In the XBase DML, this is done using the "$" operator (contained in). There is no equivalent with the beginLocate and applyLocate methods in the OODML.

Do not despair ... There is a way to do this, but guess what? It means using the XBase DML. All you have to do is open a table using the old USE command (which of course still works), and use the NOUPDATE operator. This means that the table is read-only, so that you cannot write to it -- which you wouldn't want to do anyway.

What I have successfully done is to open a table in the form's onOpen event, and in the form's onClose event issue a CLOSE TABLES command. (Or you could be very specific and use CLOSE TABLES IN tablename.)

I usually place two buttons on a form with an entryfield. The entryfield lets the user enter the value to look for. The first button performs the locate -- it will find the first occurrence of the string being searched for; the second button calls the XBase DML CONTINUE statement and continues searching if the specific record was not found.

There is a sample form that is in the same .ZIP as this HOW TO document called LOCATE.WFM that uses the following code. It assumes that the BDE Alias for the MUGS sample tables exists (as it should from the original installation of Visual dBASE 7.x, but if you don't have 7.x, then this sample will not have been installed -- the form won't work without it, but you can examine the code).

One very important point, which is why the code below appears a bit more complex than you might expect: since the XBase DML command to open the table a second time is being used, moving the record (or row) pointer in this instance of the table has no effect on the row pointer in the form (which is using the OODML, but even if you open a table twice in the XBase DML this happens). Because of this, we need to store a key value when we find a match, and then use the OODML method findKey() to actually move the row pointer on the form's instance of the table. If you read the code, hopefully it will make more sense ...

Another very important point is that for this to function properly you need an index that contains unique values (for this example, it uses the last name and the first name). If you do not, you may not ever find the correct record.

The code below is for the locate pushbutton (see the code at the beginning of the LOCATE.WFM -- it does some things to the table ...):

   function LocateButton_onClick
      // so that we're not constantly updating
      // the controls ...
      form.rowset.notifyControls := false

      // select the table that was "used" in 
      // the form's onOpen
      select customer

      // store value of entryfield:
      _app.lookFor = form.namefind.value

      // do the locate:
      go top
      locate for _app.lookFor.toUpperCase().rightTrim() $ ;
                 upper(customer->:last name: )

      // if we hit the eof() it's not there ...
      if eof() 
         skip -1 // so we're not at the end of file mark
         msgbox( "No match found!" )
         form.ContinueButton.enabled := false
      else
         // save value of "key fields" so we can use
         // the findKey method of the rowset on the form ...
         // NOTE: this doesn't work without a unique 
         // key to search for -- in this case it requires
         // both first and last names:
         cKeyFields = upper( customer->:last name: +;
                             customer->:first name: ) 

         // move the pointer -- assumes that the index is
         // set properly:
         form.rowset.findKey( cKeyFields )
         form.ContinueButton.enabled := true // so user can "continue"
      endif

      // now we DO want to notify the controls:
      form.rowset.notifyControls := true

And the following code would be used for the "continue" pushbutton:

   function ContinueButton_onClick
      // so that we're not constantly updating
      // the controls ...
      form.rowset.notifyControls := false

      // select the table that was "used" in 
      // the form's onOpen
      select customer

      // store value of entryfield:
      _app.lookFor = form.namefind.value

      // 'continue' the locate:
      continue

      // if we hit the eof() it's not there ...
      if eof() 
         skip -1 // so we're not at the end of file mark
         msgbox( "No match found!" )
         form.ContinueButton.enabled := false
      else
         // save value of "key fields" so we can use
         // the findKey method of the rowset on the form ...
         // NOTE: this doesn't work without a unique 
         // key to search for -- in this case it requires
         // both first and last names:
         cKeyFields = upper( customer->:last name: +;
                             customer->:first name: ) 

         // move the pointer -- assumes that the index is
         // set properly:
         form.rowset.findKey( cKeyFields )
      endif

      // now we DO want to notify the controls:
      form.rowset.notifyControls := true

You will see that much of the code is duplicated. An enterprising developer could probably make all of that a single method that was called from two different pushbuttons and depending on which was actually calling the method, it would either execute the locate or the continue statements ...

NOTE: Gary White did this another way. There is a form included in the .ZIP file for this HOW TO called LOCATE2.WFM -- this is Gary's version of doing the locate/continue ... It works quite well (as one might expect of Gary's code).

Back to the menu

Other Methods of Finding Data
There are always other ways to find records. You can perform a row-by-row search yourself, using code similar to that below, but you should turn off the rowset's notifyControls property first (otherwise each call to the rowset's next() method will attempt to update the form, which slows down the processing and causes a nasty blinking problem on screen):

   do while not form.rowset.endOfSet
      if form.rowset.fields["lastname"].value == "something"
         // found it:
         exit
      endif
      form.rowset.next()
   enddo
This can be a bit slow, but it may be necessary in some circumstances ...

Back to the menu

The XBase DML KeyMatch() function does not have an exact duplicate in the OODML. Its purpose, as you may recall, is to search an index tag to see if a specific value exists in it. The handy thing, of course, is that it does not move the record pointer when it does this.

However, Gary White created the following routine which does basically the same thing. For it to work, you must provide the table name, the index tag name, and the value you wish to search for (the following is an example of its usage):

   if keyExists( <yourtable>, <theIndexTag>, <keyValue> )

The actual function looks like:

   function keyExists
      parameters cTable, cIndex, cKey
      local q, lFound
      q = new Query()
      q.sql    = "select * from &cTable."
      q.active = true
      q.rowset.indexName = cIndex
      lFound   = q.rowset.findKey( cKey )
      q.active = false
      release object q
   return lFound

Briefly, the code opens a new query (local to the function), searches for the key value using the findKey method of the rowset, and if found, returns a true value, otherwise returns a false (and then cleans up after itself) ...

Back to the menu


Setting up filters in dB2K has gotten interesting. And even more interesting is that if you examine the options, you will see that the rowset object has several different ways to filter your data.

The biggest problem is that people often end up trying to combine these, and make their lives a lot more difficult. This may be because if you look up "SET FILTER" in the online help, it suggests that you use "applyFilter()/beginFilter()" which is not necessarily the way you want to do this.

To put it bluntly, it is not necessary -- and will actually sometimes cause problems -- to try to use the filter property together with the beginFilter and/or applyFilter methods. Seriously ... If nothing else you will fry your own synapses.

So, what are these options, and how do they work?

The Filter Property
This is the one that I recommend you use when just programmatically setting your own filters. There are some drawbacks, which we'll examine, but this is really the best way to go in most cases.

The filter property takes a SQL filter -- i.e., something you might use in a WHERE clause, and more importantly, it MUST BE A STRING. (And should not contain dBASE functions!!!) Unfortunately, the filter property is not complete, in that while it effectively must be SQL, it cannot use the LIKE option's wildcards.

How, pray tell, do you end up filtering your table on dates, numerics, and logicals, then?

Take a look at the following:


      form.rowset.filter = [MYDATEFIELD = '] + date() +[']

What is with the use of the square bracket characters? These are delimiters in dB2K -- ones that most people don't use, because the single and double quotes have always been "good enough". However, if you want to design a string that has quotes inside it, using the double and single quotes can get very difficult to read. Using the brackets makes it much easier to read and offers some flexibility.

What does this do? If today's date is: 12/07/1998 (which it is at the time I am writing this), your filter string would be converted to look like:

      [MYDATEFIELD ='12/07/1998']

The BDE can understand this just fine, and when your table is examined, all should be well.

PROBLEM: SQL itself does not understand non-US date formats. This is not a BDE issue, nor is it a Visual dBASE issue ... This means you need to use the US/American date format of MM/DD/YYYY. (There is more detail on dates and date formats in XBase to OODML in the Knowledgebase)

Numerics are just the same: you must convert them to strings. (But don't panic -- dB2K does what is called "automatic casting" when working with character strings. If you "add" a character string and a numeric, date, or logical, you will not get errors like you used to in earlier versions of dBASE -- instead, dB2K will convert that numeric, date, or logical value to a character string.)

      [MYNUMERICFIELD = '] + somenumber +[']

Should work fine.

Logicals must use the words TRUE or FALSE. That's the only big deal.
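For example (MYLOGICALFIELD is a hypothetical field name):

   form.rowset.filter := [MYLOGICALFIELD = TRUE]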


Back to the menu

Filter By Form
This is a little-known ability to most XBasers, and most people haven't quite gotten their minds around it (partially because in older versions of dBASE this kind of thing was a lot of work to do ...).

The idea is that you can create a form with all the datalinks you need, and allow the user to enter what they want to search for in those fields. (There is one drawback -- you cannot use "or" type filters this way -- ALL fields that have values in them must "match" or the filter does not work ...)

This is done by using two methods: beginFilter() and applyFilter(). The first method (beginFilter() ) clears out the controls on the form, allowing the user to enter what they need to do their filter. The applyFilter() method actually executes this filter.

The applyFilter() method returns a logical value if it found any matches or not -- true means it did, and false means it did not. So you could code something like:

   if form.rowset.applyFilter()
      // do something
   else
      msgbox( "No Match Found" )
   endif

Another method you should be aware of, that was designed to work with these other two methods is clearFilter() -- which clears out the current settings.

As a side-note: clearFilter() sometimes works with the filter property, but it is not reliable there. To clear out the filter property, you should simply assign a blank string ("") to it ...
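In other words, a sketch:

   form.rowset.filter := ""   // reliably clears a filter set via the filter property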

Back to the menu

Using beginFilter()/applyFilter() Programmatically
It is possible to use these methods programmatically as well as in the Filter by Form option noted above.

For a simple search, you can use applyFilter() with the appropriate criteria in the parens:

   form.rowset.applyFilter( form.rowset.fields["fieldname"].value = "something" )

The applyFilter method actually will store the necessary information into the filter property as a string:

   ? form.rowset.filter
     // should show:
   fieldname = "something"

You can augment the filter's property at this point by using:

   form.rowset.filter += [ AND Color = "Green"]

Do not forget to put a space after the open bracket and before the word "AND". (You can use the 'OR' operator here as well ...)

This can be unwieldy and if you wanted to do a more complex filter, this would not be the way to go.

You could do something like:

   form.rowset.fields["fieldname1"].value = "something"
   form.rowset.fields["fieldname2"].value => "something else"
   form.rowset.fields["somedate"].value   <= date()

Note however that all three of the conditions given above would have to evaluate to true for a row to meet the condition of the filter.

Back to the menu

The filterOptions Property
The filter in the rowset, whether you use the filter property, or the beginFilter/applyFilter methods, uses the filterOptions property of the rowset. This allows you to determine if the filter is case sensitive, length sensitive or a combination. There are four options:

   0 -- Match length and case // this is the default!
   1 -- Match partial length
   2 -- Ignore case
   3 -- Match partial length and ignore case

One thing to note or remind you of -- the filterOptions property defaults to 0 ("Match length and case"), which means an exact match must be found for the filter to return a row; you may want to change that to make your filter more flexible. See online help for more details. These are rather useful.
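For example, to match rows regardless of case, on a partial string at the start of the field (a sketch, with a hypothetical field name):

   form.rowset.filterOptions := 3
   form.rowset.filter        := [LASTNAME = 'mc']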

Back to the menu

Filtering on Substrings and Complex Filters
As noted in the OODML section of this document, you cannot do a locate on a string contained in another string. The same problem exists in filters. In addition, creating complex filters, or filters that use dBASE functions, is not easy with the OODML filter property and methods.

Luckily, the developers at Inprise (now dBASE, Inc.) realized this and came up with the canGetRow event of the rowset.

This event is used to evaluate each row in a rowset based on whatever criteria you give it, and will return a row only if it matches said criteria. (Note: for large tables this can be very slow and you may want to consider if you would rather use the SQL SELECT ... WHERE clause -- discussed below)

The canGetRow event belongs to the rowset, and can be used either with a codeblock:

   form.rowset.canGetRow = {|| return SomeLogicalExpression }

or it can be used as a function that returns a logical value (the example below assumes a date field where you want dates that fall between a specific range, perhaps determined by the user):

   function rowset_canGetRow
      // set your conditions here:
      local lReturn
      lReturn = this.fields["DateField"].value >= form.beginDate
      lReturn = lReturn and;
                this.fields["DateField"].value <= form.endDate
   return lReturn

Back to the menu

Before we leave the topic of filters, it is possible to do substring filters using the SQL Select statement used in your query's SQL property.

Using the WHERE clause of the SQL SELECT statement, you can use options such as "LIKE" -- which uses wildcard characters. An example might be to find all the names in a table that contain the letters "as" in any specific location. To do this, you would set your sql property to read:

   queryname.sql = [select * from customer where upper(customer."Last Name") like "%AS%"]

Notice that SQL can use its own UPPER() function. The "where" clause here is comparing the uppercase value of the "Last Name" field in the "customer" table (note the syntax -- if the field name does not have a space, you do not need the quotes -- but it is probably a good idea to use them anyway) against the value 'like "%AS%"'. What the "LIKE" option does is use wildcards. The "%" character is a wildcard, and when used at the beginning of a string, it says to match anything up to the literal value (in this case "AS"). When used at the end of a string, it means that the match must start with the literal value ("AS") and end with anything. When used in combination (as shown) it means any string that contains the literal value anywhere in it. A good book on SQL will show you all the permutations of wildcards, and they appear to work here (via the BDE).

More importantly, you may, as a developer, need to pass a filter value -- some string you need to limit the rowset to, usually as a parameter of some sort. To do this, you need to concatenate the filter string into the same sort of statement as above:

   queryname.sql = [select * from customer where upper(customer."Last Name") ]+;
                   [like "%]+cFilter+[%"]

One special caveat: Gary White, while testing this, found that although in a normal SELECT statement you can (and sometimes should) include the table extension (i.e., "select * from customer.dbf"), when using the WHERE clause as we are here, for whatever reason, the statement will not work properly -- leave off the extension and the exact same statement will function properly.

Gary has provided a sample form, called CUSTOMER.WFM, that is included. It shows that using the WHERE clause to filter the data is VERY fast. When you run it, enter a value into the entryfield for some value you wish to filter the data on (must be in the last name), and click the 'run' button. The form will reassert the SQL statement for the query, and then show the elapsed time in seconds.

A prime example of where you might want to use this ability is the lookupSQL property of the field object.
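A sketch of that idea (hypothetical table and field names -- lookupSQL replaces the stored code with the looked-up description for display, and the SELECT statement it holds can use the same WHERE/LIKE options shown above):

   form.rowset.fields["state"].lookupSQL := ;
      [select code, name from STATES where name like "%New%"]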

Back to the menu

The setRange()/clearRange() Methods

In the XBase DML we have SET KEY, which is a very fast method of limiting the values shown in a table to those where a field fits within a range of values.

The OODML version of this is the rowset's setRange() method. This can be used to limit a rowset to a range, with a lower value and an optional upper value. The fun part is that you can specify any combination of lower value and upper value.

The important thing here is that setRange (like SET KEY) works off the current index expression. If the range you set does not match the expression, you will either receive an error, or it will not work ...

If you want to limit the rowset to rows matching one specific date (if your index expression uses the DTOS() function you will need to use it here as well):

   form.rowset.setRange( {01/01/1998} )

This will limit your rowset to only the rows where the date field used in your index expression exactly match that date.

More useful, however, is the ability to provide a lower, upper or lower and upper range. The following examples show each of these. To start, let's provide a lower range:

   form.rowset.setRange( {01/01/1998}, NULL )

The use of "NULL" in the "upper" position is all that is necessary to tell dBASE to limit the rowset to only rows where the value in the date field is greater than or equal to the date shown.

   form.rowset.setRange( NULL, {01/01/1998} )

In this example, only the rows where the value in the date field is less than or equal to the date shown will be displayed.

Finally, a range:

   form.rowset.setRange( {01/01/1998}, {10/31/1998} )

This will limit the dates to ones that are greater than or equal to (>=) the lower value, and less than or equal to (<=) the upper value.

You can use numeric, character, date, datetime, whatever values that provide a range, and that you use in your index.

One thing some developers have found to be very useful is using setRange() to do one part of a filter (it's very quick), and then using the filter property of the rowset to further limit the data. The combination can be quite flexible, fast and powerful.
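A sketch of that combination (hypothetical field names; the index is assumed to be on the date field):

   // fast: narrow the rowset to the date range via the index ...
   form.rowset.setRange( {01/01/1998}, {12/31/1998} )
   // ... then use the filter property to limit it further:
   form.rowset.filter := [STATE = 'CA']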

Once you have a range set, it is useful to be able to clear it out. This is done simply by using the clearRange method:

   form.rowset.clearRange()


Back to the menu

Using the Params Property of the Query
Yet another way of filtering your data dynamically is to use the Params property of the query object -- which is actually an associative array. The indices of the associative array are the parameter names used in the SQL Select statement.

I think the difficulty with using this lies in the sequence in which everything must be defined.

You have to place the actual parameters for the SQL Select into the SELECT statement before you define them as part of the query, and you do this definition before you activate the query.

Hence, the sequence looks like:

   q = new Query()
   q.sql = "select * from tablename WHERE fieldname = :parameter"
   q.params["parameter"] = "somevalue"
   q.active = true

Note that the above is pseudocode and won't run by itself. The parameter in the SQL select statement above is defined by placing a colon (:) in front of the parameter name.

A working example would be to define a pair of parameters as beginning and ending dates, perhaps with the MUGS database, in the Invoice table, where you want a list of invoices between two date ranges. The code shown below assumes that the values assigned to the variables "InvoiceDateBegin" and "InvoiceDateEnd" are defined, possibly in a form where the user selects the dates ... (you should be able to copy and run the code below):

   // as noted, normally these might come from
   // some form ...
   InvoiceDateBegin = {12/10/97}
   InvoiceDateEnd   = {12/15/97}

   // create database and activate it
   d = new Database()
   d.databaseName := "MUGS"
   d.active       := true

   // create query
   qInvoice = new Query()
   // create database reference
   qInvoice.database := d
   // create query with parameter references
   qInvoice.SQL := [SELECT * FROM invoice WHERE ]+;
         [invoice."order date" >= :dBegin AND ]+;
         [invoice."order date" <= :dEnd]
   // define parameters associative array
   qInvoice.params["dBegin"] = InvoiceDateBegin
   qInvoice.params["dEnd"]   = InvoiceDateEnd
   // activate query
   qInvoice.active := true
   // do something ... 
   if qInvoice.rowset.endOfSet
      msgbox( "No matching rows in range ...", "No match", 16 )
   else
      msgbox( "Matches: "+qInvoice.rowset.count(), "Matches found", 64 )
   endif

   // cleanup:
   qInvoice.active := false
   d.active        := false
   release object qInvoice, d

NOTE: if the values in the params change through code, you can use the query object's requery() method to re-evaluate the query ( using example above, qInvoice.requery() ).
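For example, narrowing the range after the query is already active (using the objects from the code above):

   qInvoice.params["dEnd"] := {12/12/97}
   qInvoice.requery()   // re-evaluates the query with the new parameter value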

An example of this is included in the .ZIP file. The form that is used is PARAMS.WFM.

Back to the menu

Coding Methods in Forms and Reports

When people first start creating forms and reports in dB2K, they get a bit confused. If you decide to add code to an event, you create a function. If you decide to write your own method, you create a function. However, both of these functions are methods. This is sometimes one of the hardest things for a procedural programmer to grasp.

To put it simply, events are programming hooks for the developer. Any functions or procedures that appear between the beginning CLASS and the ending ENDCLASS statements are methods. If you write code to respond to a pushbutton's onClick event, that code will likely begin with the FUNCTION keyword. Regardless of that keyword, your code is a "method" of the form. Methods that respond to the built-in events (onOpen, onClick, etc.) are sometimes also referred to as "event handlers", but they are still methods.

Calling Methods
When working with methods that are contained in the form class definition (before the ENDCLASS statement), there are different ways to call a method.

If you assign a method to an event of a control (e.g., a pushbutton) in the constructor code for the control, the reference will look like:

   CLASS::MyFunction   // NB -- No brackets

For example, if you were using the onClick event of a pushbutton, and all the defaults were used, you might see in your form's source code:

   this.Pushbutton1 = new Pushbutton( this )
   with( this.Pushbutton1 )
      onClick = CLASS::Pushbutton1_onClick
      left = 1.0
      top = 1.15
      text = "Push Me"
   endwith

However, if you wish to call (execute) this method from another method in the same form or class definition:

   CLASS::MyFunction() //NB -- brackets

Now, the big question comes up about passing parameters.

If you need to pass parameters to a method, you need to determine a few things. Some code, such as a pushbutton's onClick, isn't really designed to handle parameters. In the constructor for the pushbutton, you cannot define the call with parameters without a serious drawback: as soon as the form is opened, the code will be fired, because in order to call the method with parameters you must use parentheses, and adding parentheses to the name of a method in the constructor code tells dBASE to call that code and execute it right then and there.

So ... should you design your methods to accept parameters?

If the methods are used as calls from events of objects on a form -- I recommend not doing so. Why would you want to pass parameters to these methods? If they are using the values that can be obtained from either the data objects or the form objects, you can simply reference those in the method itself.

One example was presented to me for this -- two pushbuttons needing to call the same method, but needing it to do slightly different things. The simple solution is to use the name property of the object calling the method, and branch based on which object that is ... something like:

   function Pushbutton1_onClick
      if upper( ) == "PUSHBUTTON1"
         // do some code for pushbutton1
      else
         // do code for pushbutton2
      endif
      // do code for both buttons

On the other hand, if your methods need parameters and you are not using them for event handlers, then by all means, go for it ...

Shortening References
One thing that can make your life a lot easier in all of your code, particularly in forms and reports (but the technique I am about to discuss can be used in a program that uses the data objects), is to shorten your object references.

A very confusing feature of the OODML is the need to get to a field via the query and the rowset:

       form.rowset.fields["fieldname"].value
       // or 
       form.query2.rowset.fields["fieldname"].value
       // or 
       form.datamodref1.ref.query1.rowset.fields["fieldname"].value
       // etc.

In your methods, you can shorten your references. If you are going to be manipulating the rowset:

    r = form.rowset
       // or 
    r = form.query2.rowset
       // or 
    r = form.datamodref1.ref.query1.rowset
       // etc.

This allows you to do whatever manipulation you need, or to use the events and/or methods of the rowset, without having to constantly type out these longer references.

If you are using the fields array only, you can shorten that along the same lines:

    f = form.rowset.fields
       // or 
    f = form.query2.rowset.fields
       // or 
    f = form.datamodref1.ref.query1.rowset.fields
       // etc.

Doing the above allows you to reference the fields in the rowset much more easily:

    f["fieldName"].value = "something"

One further advantage to this is that it can actually speed up long processes, as dB2K is not having to resolve the longer references each time it sees them.

When working in a form, you can also shorten references to containers:

   fn = Form.Notebook1
   fn.Entryfield1.value := "something"

This is very useful in some code. If you are working with, say, a container within a container, the long references can get to be frustrating, and you could get carpal tunnel syndrome just typing all those references ...

Back to the menu

Relating Tables

dB2K is, among other things, a relational database development tool. A lot of folk get confused in setting up related tables to function properly in dB2K.

This is, in dB2K, one of the simplest things to do, but it's not covered in a lot of detail in the online help and the Developer's Guide, probably because the documentation folk figured it was easy.

If you are working with local tables, there are two things you need to know: masterRowset and masterFields. That's it. If you are working with SQL Server tables (Interbase, Oracle, etc.), you only need to know: masterSource.

An Example
If you wish to design a datamodule or a form (or a report) that uses multiple tables that are related, you need to know the relationship. A prime example is perhaps in an Invoice system -- you might want to have a form for entering Invoices. The invoice table would be the parent or primary table. A customer table would be linked in, but more importantly, a line-item table (each item for the invoice) would be necessary, and this would be a child or detail table.

The following is a step-by-step using the tables in the MUGS database alias that is set up with the BDE. If you do not have this alias set up, you may wish to go back and do this. (If not sure how, see below -- a section of this document covers setting up BDE Aliases.) This example uses tables that already exist, and actually there's a working datamodule already set up -- we want to do a step-by-step to do it ourselves ...

In the Navigator, click on the "Tables" tab. In the "Look In" combobox, select the alias for "MUGS". You should now see several tables.

Now click on the "Datamodule" tab. Double click the "Untitled" icon to create a new datamodule.

When the design surface appears, go back to the navigator and click on the "Tables" tab again.

Drag the "Invoice" and "LineItem" tables to the design surface (notice that a database object is brought over with the first table -- this is supposed to happen, since we are using a BDE Alias ...).

Click on the query for the "Invoice" (the Parent) table, and select "rowset" in the inspector. Click on the "I" button (Inspect), and you will see rowset properties. Click on the indexName property, and in the combobox select "Invoice ID" for the indexName.

Click on the query for the "LineItem" table, and select "rowset" in the inspector. Click on the "I" button. Click on the masterRowset property, and in the combobox select "invoice1". This has now told the designer that the master rowset (or parent) for this relationship is the Invoice rowset. We need to set the linking field, so select masterFields, and in the combobox select "Invoice ID".

At this point, we have a working one to many relationship. (In the datamodule it may not look like much, but it's true!) Save the datamodule and exit the designer (CTRL+W works for this -- give it a name like "TESTING" or something). We'll come back to this in a moment.
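If you open this datamodule in the Source editor, the streamed-out code should look roughly like the following sketch (trimmed -- the property order, the SQL strings, and the class name will vary with your version and what you named things):

   class TESTINGDATAMODULE of DATAMODULE
      this.MUGS1 = new DATABASE()
      this.MUGS1.databaseName = "MUGS"
      this.MUGS1.active = true

      this.INVOICE1 = new QUERY()
      this.INVOICE1.database = this.MUGS1
      this.INVOICE1.sql = 'select * from "invoice.dbf"'
      this.INVOICE1.active = true
      this.INVOICE1.rowset.indexName = "INVOICE ID"

      this.LINEITEM1 = new QUERY()
      this.LINEITEM1.database = this.MUGS1
      this.LINEITEM1.sql = 'select * from "lineitem.dbf"'
      this.LINEITEM1.active = true
      this.LINEITEM1.rowset.masterRowset = this.INVOICE1.rowset
      this.LINEITEM1.rowset.masterFields = "INVOICE ID"
   endclass

The two rowset properties at the end are the entire relationship.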

What about constraints? Well, believe it or not, it's all automatic. There is no need to tell dBASE that you cannot go past the last child row for a specific parent -- it will not allow you to. (See discussion on SET SKIP elsewhere in this document, if you need to do something like that.)

What about using SQL Server tables (Interbase, Oracle, etc.)? These are actually easier, if you can believe it. Most SQL Server tables handle relationships easier than dBASE tables. Rather than using the masterRowset and masterFields properties of the rowset for the child (LineItem) table, you would use the query object of the child table, and the masterSource property. When you point the masterSource at the parent table, everything else is done for you. However, that only works properly for SQL Server tables. If you are using local tables, you need to use the method described above.
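A sketch of the SQL Server flavor, assuming hypothetical INVOICE and LINEITEM tables reached through a database object named ibdata1 (the names here are made up for the example):

   q1 = new Query()
   q1.database := form.ibdata1
   q1.sql      := "select * from INVOICE"
   q1.active   := true

   q2 = new Query()
   q2.database     := form.ibdata1
   q2.sql          := "select * from LINEITEM"
   q2.masterSource := q1.rowset
   q2.active       := true

(Depending on the server and driver, you may also need parameters in the detail query's SQL statement that match field names in the master rowset.)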

Now, to see this datamodule in action, we'll create a very quick form:

Click on "Forms" in the navigator, and double click the "Untitled" icon to create a new form (if asked about the Wizard, select the Design button). With a new form on the screen, select the navigator, and select the "Datamodule" tab and drag the new datamodule to the design surface.

Notice that the field palette automatically has both tables showing, and the fields from both tables.

Drag some fields from the Invoice table to the design surface. However, let's use a grid for the line items. To do this, drag the grid onto the form. (You will need to play with the size of the form and the grid a bit.) Make sure you set the dataLink property of the grid to the LineItem table (it will appear as LINEITEM1 in the combobox in the inspector). You may want to set the columns that are displayed, but for the moment, leave the Invoice ID.

Save and run the form. When you navigate in the parent table, notice that the rows displayed in the grid change, and that they all have the correct Invoice ID. This is automatic ... If you were to add a new row to the Line Item table, it would automatically be assigned the Invoice ID for you.

Hopefully this will get you over any major hurdles you have getting started with this kind of relational tables scenario ... This is a simple example, and I realize that there are a lot of situations that can get more complex, but the concept is the same even for those more complex situations.

NOTE: The linking index for a child/detail rowset cannot be a primary index. This is because you need to be able to have multiple child rows in a table (for your own application this may not be a requirement, but the data objects do not understand that), and by definition a primary index stores only unique keys. By the same definition, a parent/master rowset cannot have multiple rows with the same key value, so you could use a primary index for the parent or master rowset ...
Back to the menu

Multiple Field Links

Some of you may have tinkered with the masterRowset/masterFields properties a bit, and discovered that with .DBF tables you cannot set a masterFields property to work with more than one field.

Don't despair. It can be done, but just not in an obvious fashion.

This particular trick comes in two parts. The first is to make sure you have an index in your detail table that uses the fields you need in sequence, for example you may need to link your tables on an Invoice Number and a Part Number. Your index expression might be:

   str( :invoice number: ) + str( :part number: )

In the datamodule, form, or report you are setting up this relationship for, make sure that the detail rowset's indexName property uses this index tag.
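If the detail table does not have such a tag yet, you can create one from the Command window with the old XDML INDEX command (the tag name INVPART here is just an example -- use whatever suits you):

   use LINEITEM exclusive
   index on str( :invoice number: ) + str( :part number: ) tag INVPART
   use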

The second part of this trick is to create a calculated field for the master rowset that uses the same expression. To do this, make sure that the master query has focus (click on the query icon), and in the inspector, click on the "Events" tab. Find the onOpen event, and click on the tool button. This will bring up the source editor. In the source editor you would enter something like the following:

    function query1_onOpen
       // create a field object:
       f = new Field()
       // give it a name:
       f.fieldName := "Invoice and Part"
       // "this" in the following statement refers to the 
       // field object -- we are telling it what the
       // calculation is to be, and it must be defined
       // in the beforeGetValue event -- note that the
       // codeblock itself must be one line -- remove the
       // carriage returns and semi-colons:
       f.beforeGetValue := ;
          {|| str( this.parent["invoice number"].value ) + ;
              str( this.parent["part number"].value ) }
       // "this" in the following statement refers to the query:
       this.rowset.fields.add( f )

Just to be sure you understand the above code ... we are creating a field object. The field object must have a name, and it must have a beforeGetValue that tells what the calculation is to be. The hard part of the code above is understanding what "this" is in each place. In the codeblock shown, "this" refers to the field object (we are defining code that will be executed for the field before the values are retrieved from the table). So, "this.parent" refers to the fields array. In the last statement, "this" refers to the query that is executing the "onOpen" code. In this one, we are adding this new field object to the fields array. Hopefully that helps.

At this point in time, this code has not been executed, so the new field will not show up in any field lists anywhere, and the datamodule, form and report designers won't have an idea that it "exists". Close down the designer (<Ctrl>+W), and immediately open it up again. This should execute the onOpen event code for the master table's query, and the field will now "exist".

Click on the detail query's icon, to give it focus, then drill down to the rowset (click on rowset in the inspector and click the tool button). Click on the masterRowset property, and select the master table's query as it appears in the combobox. Next, click on the masterFields property, and select the new calculated field.

If all went well, you should be set and ready to roll.

Thanks to Todd Kreuter who posted an abbreviated form of this in the newsgroups. I had forgotten this technique, never having had to use it ...

Back to the menu

Adding Parent Rows to Parent/Child Tables
One problem that you might see if you use a masterFields/masterRowset setup for parent/child tables is that when you add a row to the parent table, it may disappear from your form immediately. This is due to default constraints in the query objects: a parent row normally MUST have a child row.

So, how do you ensure that you have a child row? I sweated over this one when I first started tinkering with dB2K (as I had an application that required this -- each parent must have a child row), and spent some time working with both the developers and one of the folk in the docs team, and came up with the following solution. If you are using datamodules, you should set this code in the datamodule ...

First, you need to use the parent rowset's canSave event, which will set the following custom property (this event fires before the actual save of the parent row, which is very important, as we have to know the rowset's state before the save):

   function rowset_canSave
      this.parentAppend = ;
             ( this.state == 3 ) // append
   return true // must return true to allow the save

The next part is to use the parent rowset's onSave event -- this fires after the code associated with the save() method fires:

   function rowset_onSave
      if this.parentAppend
         // change "childquery" to the name of the
         // query object for the child rowset
         r = this.parent.parent.childquery.rowset
         // now we add a child row:
         r.beginAppend()
         // set the key field value -- change "keyfield"
         // to the name of your linking field:
         r.fields["keyfield"].value := ;
                  this.fields["keyfield"].value
         // this is a good place to set any
         // default values you may need as well.
         // Now we save the child row so that it
         // exists:
         r.save()
      endif
      // reset this value:
      this.parentAppend = false
      // and save the parent row -- this happens
      // automatically, so no code is required!
   return true // must return true!

After all that, you may want to add some code in your form that forces the user to the child rowset and the new (effectively blank) row, so that they can add whatever is necessary to the child row (put them into edit mode for the child row with beginEdit() for example).
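A minimal sketch of that, again assuming a child query named "childquery" placed directly on the form (adjust the reference if you use a datamodule):

   // after the parent row is saved, make the child
   // rowset the form's active rowset, and put the
   // new (effectively blank) child row in edit mode:
   form.rowset := form.childquery.rowset
   form.rowset.beginEdit()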

Another note on this -- if you set defaults for the child rowset, you will want to add an onAppend event for the child that sets the defaults there as well, since this method only gets called when the parent is being appended to.
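Something like the following, where the field name and default value are purely hypothetical:

   function childrowset_onAppend
      // same defaults you set in the parent's onSave code:
      this.fields["quantity"].value := 1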

Back to the menu

Cascade Deletes
When you work with the parent/child or master/detail relationship, as noted in the section on adding a parent record above, there is another issue to be concerned with -- that is deleting a parent row, and having the child rows left behind (this is called "orphaning" the child rows).

If you delete the parent, you have no reference to the child rows; they are not available to do anything with (this is once again due to the constraint in the masterRowset/masterFields settings). You could create a routine that deletes all your orphaned rows, but this is not necessary if you do the following.

You need to use an event in the parent rowset -- this one would be canDelete. The reason to use canDelete is the same reason you would want to use canSave in the code in the previous section of this document -- it fires before the delete() method of the parent.

   Function rowset_canDelete
      // this assumes a datamodule, for a form
      // with the queries placed directly on it,
      // you could use "form.childquery.rowset"
      r = this.parent.parent.childquery.rowset

      // loop until we get to the end of the child rowset --
      // don't panic -- the child rowset is constrained
      // by the key field in the parent rowset, and only
      // the child rows that match the current parent
      // are seen here:
      do while not r.endOfSet
         // delete it (delete() moves to the next row):
         r.delete()
      enddo
      // that's it - the child rows are deleted, when
      // this method is done, the next event to fire
      // is the delete on the parent, and then if you
      // had an onDelete call, that would fire ...
   return true

It's really that simple.

Back to the menu

Multiple Rowsets on Forms
When you have multiple rowsets on forms life can be interesting, as we've already noted. However, one other thing you may wish to consider is how do you want to lay the form out?

There are several options (these are the ones I can think of off the top of my head):

Of all of these, the last is the least efficient, and the first can, as noted, be rather cluttered.

The second (the child table in a grid) is something you might want to consider, but the grid is not (yet) very good at handling a lot of input. A lot of features for entryfields, comboboxes and checkboxes are not implemented in the grid's version of the same controls.

A property of the form you should always keep in mind is the rowset property -- this is used to point to the "active" rowset -- the one the form thinks is the current rowset. This can be useful, as a lot of custom controls that people create assume that they are working on the form's rowset, which can be referenced simply as "form.rowset".

If you are using a datamodule, it's much easier to use:

   form.rowset

than it is to use:

   form.datamodref1.ref.query1.rowset
Creating code to deal with the form's rowset is a snap ... and it means you can create some pretty generic code.
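For example, a single navigation button can serve whichever rowset is current (a minimal sketch):

   function nextButton_onClick
      // works no matter which rowset the form points to:
      form.rowset.next()
      if form.rowset.endOfSet
         form.rowset.next( -1 ) // don't fall off the end
      endif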

Single Page of a Form With Multiple Rowsets
One thing to consider when doing all this is how you want to set your controls for your forms.

By default, the standard controls for navigating and such that ship with Visual dBASE assume "form.rowset". This means that only one rowset -- the one the form's rowset property points to -- is affected by these controls.

You could modify the code, you could create a whole new set of controls ... or,

The Single Pushbutton Method
You could find a way to change the form's rowset pointer, so that when you need to, you can change rowsets by a click of the mouse.

In the case of a single screen with two rowsets on it, you could place a pushbutton on the form. The pushbutton would need to have the toggle property set to true. This makes the pushbutton a "two-state" button. It will stay either UP or DOWN depending on its previous state. When the pushbutton is "up", the value property is false, when it is down, the value property is true.

The onClick event would need to be set to do something like the following:

      function Pushbutton1_onClick
         // if the button is down:
         if this.value
            // set the text to what will happen when the button
            // is clicked next:
            this.text := "Parent Rowset"
            // set the rowset to the child rowset:
            form.rowset := form.childquery.rowset
         else
            // do the opposite:
            this.text := "Child Rowset"
            form.rowset := form.parentquery.rowset
         endif

This is cute, and it actually works pretty well ... and for a simple form is a good way to go, except that you have to train your users to use the button ... (You might have to do something where you switch colors of text controls or something so that it is more apparent which half of the screen is in control ...)

The Multiple Pushbutton Method
This one is trickier, but doable. It means having a complete set of pushbuttons for the second rowset ... I don't recommend it. I've done it in the past, the amount of work is annoying.

The Modified Code for Pushbuttons Method
Another method, suggested by Peter Rorlick, is to modify your navigation, editing, etc., buttons so that you can change the rowset reference for the buttons. An example of this is the pushbuttons in the dUFLP library in the Knowledgebase, in CUSTBUTT.CC. These buttons have a custom "rowset" property which can be set. Even better, the containers that are in this file have the same property and a simple event to change the rowset property for all the contained buttons. Details are in that file, and I'd rather not try to explain it all here ...

Multiple Page Form With Multiple Rowsets
This is actually pretty easy. In most forms that work with multiple pages, a tabbox control is used to switch pages (although it could be done with pushbuttons or a bunch of other ways as well).

In the tabbox's onSelChange you just need to see which page the user selected (use the curSel property), and set the form's rowset property appropriately:

      function tabbox1_onSelChange
         do case
            case this.curSel == 1
                 form.rowset := form.query1.rowset
            case this.curSel == 2
                 form.rowset := form.query2.rowset
            // this leaves room for more if needed
         endcase
         form.pageNo := this.curSel

If you use other methods like pushbuttons, the idea would be the same -- make sure you set the form's rowset property, and also the pageNo property ...

Using a Notebook Control With Multiple Rowsets
This can be a nice way to do things. The Notebook control is pretty easy to use, and you could actually use code very similar to the code shown for the tabbox above, except you would not need to change the form's pageNo property.
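Assuming the notebook's pages line up with query1 and query2, the handler is the tabbox code minus the pageNo assignment:

   function notebook1_onSelChange
      do case
         case this.curSel == 1
              form.rowset := form.query1.rowset
         case this.curSel == 2
              form.rowset := form.query2.rowset
      endcase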

Back to the menu

Navigating Tips with Rowsets

Allowing your users to navigate through a rowset is of course desirable, but ... allowing them to navigate while they are currently editing or appending data may not be. dBASE will automatically save a row if a user tries to navigate (including using the <PgUp> or <PgDn> keys, and any other method you can think of to navigate in a rowset).

You may want to avoid this by using code like the following assigned to the canNavigate event of the rowset (Gary worked out this code for the Visual dBASE Tutorial which is in the Knowledgebase ...):

   function rowset_canNavigate
      /*
          Check to save any changes before allowing navigation.
          For whatever reason, this gets called twice. If the
          user chooses Yes to Save, or No to abandon, then
          the next time through the rowset.modified property is
          false so there's no problem. If the user chooses to
          cancel the navigation, they'll get prompted twice.
          For that reason, we set a flag to indicate it's been
          here once. If the flag exists, we null it, if not we
          run the validation.

          BIG NOTE: If a future version of VdB corrects the
          behavior so the canNavigate routine does NOT fire
          twice, this code will not work correctly.

          This was written for Visual dBASE 7.01 build 1419

          Code by Gary White
      */
      local bYes, nAnswer
      if type("_app.cancelNav") == "L" and _app.cancelNav
         _app.cancelNav := null
         bYes = false
      else
         bYes = true
         if this.modified
            nAnswer = msgbox("Save changes before leaving record?",;
                             "Data has changed",32+3)
            do case
               case nAnswer == 6 // Yes
                    this.save()
               case nAnswer == 7 // No
                    this.abandon()
               otherwise         // Cancel
                  bYes := false
                  _app.cancelNav = true
            endcase
         endif
      endif
   return bYes

This code will check to see if the rowset was modified before the navigation is allowed to occur. If yes, it asks the user if they wish to save their changes. If they click "Yes", the changes are saved, if they click "No", the changes are not saved, and if they click the "Cancel" button, the navigation is abandoned completely.

If you have a form with related tables, this exact same code should be used for each table's canNavigate event. The nice thing is that if you are editing (or adding) a child row and attempt to navigate in the parent row, you will be asked about saving changes in the child row (it won't state which rowset, but the message will appear asking -- I've successfully used this technique ...).

Back to the menu

Row in use by another. Retrying lock ...

The chances are if you've done much with dB2K you've seen this error, or your users have.

The most common reason this error occurs is that not all of your queries, databases or datamodules are deactivated. When a form that is using one of these closes, you need to see about deactivating it. If you do not and you use another form or run a program that needs the table or tables referenced, you are likely to run into this error again.

The difficulty lies in finding a good way to ensure that the queries, databases and datamodules are all deactivated.

One method of handling this is to deactivate them in the form's canClose event. Set the active property to false for each control needed, something along the following lines. Note that you should do it in proper "container" sequence, starting with the query or queries, the database(s) (if any), and then the datamodules (if any):

    function form_canClose
       // assumes a datamodule:
       form.datamodref1.active := false
       // if no datamodule, then you need to
       // set each data object's active property
       // to false -- i.e.,:
       // form.query2.active := false
       // form.query1.active := false
       // form.database1.active := false
    return true

However, at least one user on the dBASE Newsgroups has discovered, to their chagrin, that even this did not solve the problem.

In the case of a set of tables that use the masterRowset/masterFields properties to link tables, sometimes the child table may still be locked. Even if you set the query's active property to false, it won't make a difference -- the query will still be active because of the locks.

You may want to try something along these lines:

    function form_canClose
       // unlock the child rowset first:
       form.datamodref1.ref.query2.rowset.unlock()
       form.datamodref1.active := false
    return true

While this is not a "guaranteed" way to resolve the issue, it should help. If anyone comes up with "a better way", be sure to let me know ... (in my limited testing this does appear to work ...)

In dB2K this has been fixed, and parent/child relationships handle internal locking properly.

Back to the menu

BDE Aliases

There's a lot of talk in the newsgroups about using aliases, but not a lot of explanation on how to set them up and actually use them. So, this is meant as a short description ...

A BDE Alias is used in two ways. If you are working with SQL Server databases (Interbase, etc.), you use the BDE Alias as a pointer to that database. If you are using local tables, you use the BDE Alias to treat those local tables as if they were a SQL Server database.

Why would you want to use a BDE alias? The following reasons come immediately to mind, and there are probably others ...

Setting Up An Alias
The first step is to actually create a BDE alias. To do this, you must bring up the BDE Administrator (there should be an icon in the folder for dB2K -- double click on it).

In the BDE Administrator, click on the "Databases" tab. The treeview may appear as closed, so click on the "+". Depending on your setup, you may or may not see a bunch of aliases already set in the Administrator. (The following example is for Visual dBASE 7.x, but the concepts are good across the board for later versions of the software -- dB2K does not use the MUGS example, but you will see DQUERY and some other aliases.) When you install Visual dBASE several aliases are created for you, for example "MUGS" and "VDBSAMPLE". Click on "MUGS" if it is there. On the right side of the screen you should see four entries:

   Type
   DEFAULT DRIVER
   ENABLE BCD
   PATH
This is the important information for your alias. The "Type" should be "Standard" for local tables, and you may see (depending on your installation) other types if you check other aliases - these refer to the table's server and/or the driver. If you are using dBASE tables, make sure the default driver is "DBASE". You probably don't need BCD, don't worry about it (check online help if you are curious). The "Path" is the actual location of the tables.

NOTE: If the "Type" is something other than "Standard", there are a bunch of other options that may appear -- you will need to examine those and have some idea what they mean for that specific server or driver.

To create a new alias, on the left half of the BDE Administrator screen, right click and select "New ...". Select "Standard" unless you need to select, say, "Interbase", and click "OK". Set the Default Driver to "DBASE" (unless you are using PARADOX tables), and then click on the "..." button for the path. This will bring up a standard path dialog ... select the path to your tables. You probably want to give this a different name -- on the left side of the screen, click on "STANDARD1" and type a new name there (like "MyAlias").

At this point, you have the alias created in the BDE Administrator. However, you will want to "Apply" it (CTRL+A) and then exit the administrator. This saves the information to your .CFG file for the BDE.

If dB2K was up at the time you did this, you should exit it and restart it ... otherwise the new alias will not appear.

Using a BDE Alias
To use a BDE Alias, you start Visual dBASE, and in the Navigator select the "Tables" tab. In the "Look In" combobox, select your new alias. (If it is selected, the image will show a green "spot" in the database image.)

From this point on, while developing, you will be automatically using this alias.

NOTE: If you work on multiple projects, life could get a bit confusing, because each database you open will stay open. If you quit dBASE and come back to it later, it will re-open your databases. Before you quit, you may want to issue a:

   close databases

command. This will shut down all BDE Aliases, so that the next time you load dBASE they will not automatically be opened. On the other hand, you may prefer them to be opened. It's up to you ...

A couple of suggestions:

When deploying an application that uses an alias, there is a bit more work -- for assistance in this, see the HOW TO on deploying a Visual dBASE 7.x application, which covers some of this. There is also a HOW TO in the INNO folder on the dB2K CD that discusses deploying a dB2K application with Inno Setup.

Back to the menu

Language Drivers

One of the things that developers in the past have not always been very aware of (particularly developers in the US) is the language drivers used to handle sorting and displaying of characters in their tables.

dBASE for DOS applications used (for the most part) ASCII as the basis for the language drivers, and there were a set of drivers for various actual languages, which included character set mappings for specific languages, as defined by DOS. (Indeed, many developers kept (or may still keep) an ASCII chart handy ...)

Many people have gotten confused when they moved to Visual dBASE or dB2K, because if they try to display a value using CHR( n ) (where 'n' is some number in the ASCII table), the character displayed is sometimes not what they expect, especially if the character has a numeric value of 128 or greater.

This is because Windows uses ANSI as the standard character set, and therefore most Windows fonts (at least most of the TrueType fonts) use a different character mapping than ASCII.

In the DOS world, setting up the language drivers was usually done on installation (although there was a setup utility that let you change this). In the Windows world, it is done via the BDE (Borland Database Engine). In either case, the language driver name used when creating the table is stored in the table's header.

If you set the BDE to one language driver, and use a table that was designed using a different language driver, the default settings in dB2K will tell you that there is a "Language Driver Mismatch". Is this serious?

Well, no. Not really. You can tell dBASE not to alert you about it (Properties menu, Desktop Properties, Country tab, uncheck the "Alert on mismatch" checkbox, click "OK"; alternatively, in the Command window, type: SET LDCHECK OFF -- and make sure that your application's .INI has the appropriate setting). Using a table with a language driver different from your current default (as set in the BDE) will not cause any trouble -- the language driver for that table will be used, as long as it can be found.

The real problem is that if you want to use some of the upper characters in the character set, they are mapped very differently from the DOS world, and you may not be getting what you thought you would ...

I seriously recommend that you set up the BDE to use ANSI characters. Very few of the TrueType fonts in Windows (of any flavor) use the old ASCII character map (Microsoft has a Line Draw font, which I think is installed with Word, that does use it -- but I don't guarantee anything ...).

In some versions of the BDE, there are problems with the US ANSI driver. I have never had a problem using the "W Europe ANSI" language driver. It always works for me, I have never had problems sorting data, finding data, etc. I even use a lot of the upper ANSI characters (ones with diacriticals -- umlauts, accents, accent graves, etc.).

As a side note, for whatever reason, when the BDE is installed with dB2K, the default table driver is set to "Paradox", and the default language driver to "Paradox ANSI". If you bring up the BDE Administrator, you can change this:

The next time you load dB2K, you will be using the W Europe ANSI driver, and any new tables created will automatically use that driver as part of their definition.

(NOTE: if you are developing for European applications, you will probably wish to use language specific drivers ... you are likely to be more aware of which you need to use than I am -- change the instructions above to match the driver you need, but by all means, use the ANSI driver, not the ASCII ...)

Whether or not you set the BDE to ANSI drivers, you may find some difficulties when working in Windows, because some characters may get changed ... if you set the BDE to use an ASCII driver, then this becomes very difficult. When you edit a form in the Source editor, the editor will translate characters that may have displayed properly in the form out to the wrong version. There is a fix you should add to your VDB.INI file that will solve any issues here:

   [CommandSettings]
   LDRIVER=WEUROPE

This setting will ensure that the source editor does not change those characters for you ... it will always assume ANSI, which is what Windows uses. (NOTE: if you already have a "CommandSettings" section in the .INI, just add the "LDRIVER" line.)

How do you CHANGE the driver of a table? That's a lot more difficult. I had a routine for Visual dBASE 5.x that would do this using low-level file functions, but the location of the language driver name has been moved in the header of the dB2K tables (as I understand it ... I'm not a low-level table header-kind of guy ...). I recommend that you create a new table with the same design (field names, types, sizes) and use the old XBase DML commands to append all rows from the original table to the new one.

One way to see what the language driver is of a specific table is to use the undocumented TableDef class (which only lets you view the table) -- it will show you the language driver the table was created with:

   t = new TableDef()
   t.tableName = "TableName"
   ? t.language

New in dB2K: the rowset object has a languageDriver property, which does the same thing ...

If you get something like "dBASE ENU cp437", the table is using an older ASCII driver ... (ENU = ENglish, US ...)

The only reason to need to change the driver for a table to ANSI is if you are using the upper ANSI characters (anything > 127). If you change the setting in the BDE, you will have access to those characters, but if you try to store, say, an o with an umlaut (ö), to the table, you may get strange results. The ASCII driver will not map the character to the same location, and so the ANSI character (which maps to 0246) will appear as a division symbol (which I don't know if there's an ANSI equivalent of ...).
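You can see the mapping difference from the Command window (what actually displays depends on your current language driver and the font in use):

   // with an ANSI language driver and a TrueType font:
   ? chr( 246 ) // o-umlaut (ö); under the old cp437
                // (ASCII) mapping, 246 was the division sign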

In a case like this, you would want to change the driver. Otherwise, you can probably not worry about it.

Back to the menu


This document was written to cover a lot of topics. While it attempts to do that, it cannot possibly cover them in as much depth as some people might wish. In addition, there may be other areas a developer might want to see more information on, but they may fall into the same categories as those covered here -- not really needing their own document. If you have specific questions please post them in the dBASE newsgroups provided by dBASE, Inc. for that purpose. Most questions are answered within a few hours of being posted.

If you have suggestions to make this better, to add new topics, etc., please drop a note in the dBASE newsgroups (Knowledgebase).

Special thanks to Gary White for helping out, both with readability and with some code samples. He did a LOT of experimentation for this particular document, and it wouldn't be in anywhere close to as good a shape as it is without his help.

Also thanks to Bob Rimmington who gave me some of the ideas for this document ... and helped make sure it made sense as well.

DISCLAIMER: the author is an employee of dBASE, Inc., but has written this on his own time. If you have questions regarding this .HOW document, or about dB2K you can communicate directly with the author and dBVIPS in the appropriate newsgroups on the internet.

.HOW files are created as a free service by members of dBVIPS and dBASE, Inc. employees to help users learn to use dB2K more effectively. They are edited by both dBVIPS members and dBASE, Inc. Technical Support (to ensure quality). This .HOW file MAY NOT BE POSTED ELSEWHERE without the explicit permission of the author, who retains all rights to the document.

Copyright 2001, Kenneth J. Mayer. All rights reserved.

Information about dBASE, Inc. can be found at:

EoHT: MISCCODE.HTM -- January 31, 2001 -- KJM