If we do not lay out ourselves in the service of mankind whom should we serve?
—Abigail Adams, 1778
Client-Server: definitely one of the buzzwords of the nineties. Or is that two of the buzzwords—we haven’t figured out yet whether “client-server” is one word or two. We’re not alone in this. A salesman for a major vendor, asked to define client-server a few years ago, responded, “Client-server? Why, that’s whatever it is I’m selling today.”
So why all the fuss about client-server? Because most organizations have their data spread out over multiple machines in at least as many formats. Client-server is advertised as the ticket to using all that data without forcing everyone to use the same applications or having to convert everything to a common format.
Of course, now that we’re in the zeros (or is that the aughts?), client-server should be passé, right? Well, although it certainly gets less attention than it did when we wrote earlier editions of this book, client-server still has a lot of life left in it.
We noticed a trend several years ago while surfing the Microsoft Web site. Where previously there had been little or no talk about database issues, now we were starting to see some action. WebDb, DAO and RDO were the first, then OLE DB, ADO and UDA. Acronymomania! Having paid data little attention before, Microsoft turned its focus from the Office suite and network wars first to the Internet and then to the enterprise. Suddenly we were drinking from a firehose, drowning in new terminology, 1.0 products and “preview” betaware. With Visual Studio 6, we started to see the first stable products of this craze. Now, these tools have been used successfully for several years, so we’ll take a deeper look at all those acronyms and a few more (like MTS and COM+) a little later, in “n-Tiers of a Clown,” below.
Let’s get back to client-server. In Microsoft’s world, client-server and database connectivity are implemented through ODBC, or “Open Database Connectivity.” ODBC is to databases what Print Manager is to printers. Just as Print Manager and appropriate drivers let you send all sorts of documents to whatever printers you want, ODBC, together with a set of drivers, lets you use data from whatever database management system you want in an assortment of applications. Windows comes with a bunch of drivers. You can get other (and perhaps more up-to-date) drivers from the manufacturer of the DBMS you need to talk to.
Visual FoxPro provides two methods of performing client-server: views and SQL Pass-Through. Views are everyman’s entree to the client-server world. You can grab server data, manipulate it in FoxPro, and then return it to the server. SQL Pass-Through gives you more control at the cost of more responsibility. You communicate with the server through a special set of functions—you can pass commands directly to the server to be executed. (SQL Pass-Through is an updated version of what the Connectivity Kit provided in FoxPro 2.x.)
A client is to me a mere unit, a factor in a problem.
—Sir Arthur Conan Doyle, Sherlock Holmes in The Sign of Four
Let’s get some basic terminology down. Start with “client” and “server.” The server’s the one who has the data you want. It might be a mainframe on the other side of the world, Oracle on your network, or Access on your machine. The server is also called the “back end.” Server data is sometimes referred to as “remote” data.
The client is the one who wants the data—in our case, your Visual FoxPro application. The client is sometimes called the “front end.” Data that originates in the client is called “native” or “local” data.
ODBC is the translator here. It sits between the client and the server and converts the client’s requests into a form the server understands. Then it converts the server’s responses back into the client’s format.
A key ODBC concept is a “data source” (also known as a DSN)—an application and a database it owns. When you want to get your hands on server data, you tell ODBC what data source to use. For testing purposes, our favorite in years past was Microsoft Access with the ODBC drivers pointing at a doctored-up copy of Access’ sample Northwind Traders database. (The doctoring was to remove embedded spaces in the names of the Northwind tables and fields. In defiance of pretty much every other DBMS out there, Access lets you use spaces in names of things. Kind of like Visual FoxPro’s long names, which also let you embed spaces in table and view names. In both cases, while Visual FoxPro can handle it, it’s generally more trouble than it’s worth.) These days, we’re more inclined to use SQL Server or MSDE (Microsoft Database Engine, which is essentially a free, albeit limited, edition of SQL Server), pointing to the SQL Server version of the Northwind database.
To create a new ODBC data source, use the Windows ODBC Data Source Administrator, available from the Windows Control Panel (in Windows 2000 and later operating systems, choose “Administrative Tools” within the Control Panel to find the ODBC icon). You can create a user data source (available to the currently logged-in user only), a system data source (available to all users), or a file data source (the information is stored in a file rather than in the Registry with user and system data sources). The exact steps for creating a data source vary, depending on the ODBC driver used, but they generally involve selecting the server (if necessary) and “database” (for example, an actual database in the case of SQL Server, Oracle, or other DBMS, or a directory containing DBF files in the case of VFP free tables or dBase files).
You can define a data source that doesn’t point at a particular database, but then you’ll get prompted to specify one every time you open that data source.
All of this terminology can be misleading, since for some data sources, such as an Access table on your local machine, the data is neither remote, nor is there truly any process running that you can call a server. Using ODBC to connect to this data source is not truly “client-server” in its pure sense, but, thanks to the abstraction of ODBC, it will appear exactly the same to the consumer of the data, the client. This abstraction of interfaces is key to letting us prototype a client-server application using a “one-tier” ODBC data source.
‘Tis distance lends enchantment to the view.
—Thomas Campbell, Pleasures of Hope
Views are a very cool idea. They let you take data from a variety of sources and handle it all pretty much the same way. Once you get your hands on the data, you don’t have to worry about where it came from.
A view is a subset of the data in a database, organized in a way that makes sense for a particular operation. Often, data in a view is denormalized to make it easier to report on or more logical for the user who’s working with it.
Views in FoxPro can be based on remote data, native data or a combination of the two. This makes it possible to process data without regard to the original source of the data. From your application’s perspective, the data in the view is just data; it doesn’t know or care whether it started out in FoxPro, Access, SQL Server, or Joe’s Original Database.
Data in a view can be updated. It’s your choice whether the updated data gets passed back to the original data source and, if so, when that happens. By definition, all the fields in a view are updateable locally (in Visual FoxPro). It’s your responsibility not to let users make updates that don’t get passed to the server. Having their updates thrown away tends to make most users a little grouchy, so ensure that your interface makes it clear when data is updateable and when it is not.
Views are based on queries. A view is defined by specifying a query that collects the data to populate the view. Like so much else in Visual FoxPro, this can be done either by command or using the visual tools (in this case, the View Designer, known for short as the VD—we can’t believe they left this name in).
It’s much easier to CREATE VIEWs with the VD than in code, but you have to accept the serious limitations of working within the VD. In Visual FoxPro 5, in a move to make VFP more ANSI-SQL-compatible by supporting the new JOIN clause, Microsoft broke the View Designer badly. Because of the way join clauses form intermediate results between tables, multiple joins to the same tables (such as a parent with two sibling children) often cannot be expressed properly by the Designer. It complains while saving a view that it will not be editable again, or gripes that columns cannot be found.
Fortunately, we have the coded method to fall back on. When you CREATE VIEWs in code (using CREATE SQL VIEW), you generally need a long series of calls to DBSetProp() to establish update criteria. One solution is a hybrid approach: Get as far as you can with the VD, coding your simple views, then use the cool utility GENDBC (which ships with FoxPro) to get the code to create the view. Make further modifications to the code, run it, and you’ve got your view. Our preferred approach, though, is to use one of the freeware utilities available for this task: eView from Erik Moore or ViewEdit from Steve Sawyer. See the Resources section at the back of the book for information on these tools.
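To make the tradeoff concrete, here’s a minimal sketch of the coded approach to an updatable remote view. The database, connection, table and field names here are ours alone—substitute whatever your own setup uses.

OPEN DATABASE MyData
CREATE SQL VIEW CustomerView ;
   REMOTE CONNECTION NorthwindConn ;
   AS SELECT CustomerID, CompanyName FROM Customers
* Now the long series of DBSetProp() calls to make it updatable.
=DBSetProp("CustomerView", "View", "Tables", "Customers")
=DBSetProp("CustomerView", "View", "SendUpdates", .T.)
=DBSetProp("CustomerView.CustomerID", "Field", "KeyField", .T.)
=DBSetProp("CustomerView.CustomerID", "Field", "UpdateName", "Customers.CustomerID")
=DBSetProp("CustomerView.CompanyName", "Field", "Updatable", .T.)
=DBSetProp("CustomerView.CompanyName", "Field", "UpdateName", "Customers.CompanyName")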
You can also create parameterized views. With these, one or more variables appear in the query’s WHERE clause, preceded by “?”. When you open the view, the values of those variables are substituted into the query. No big deal, right? Except that, if the variable doesn’t exist, the user is prompted for a value. Imagine a view that pulls out all customers in a single country. Rather than writing separate versions for each country, or some deviously tricky, macroized routine you’ll never figure out how to debug, you use a parameter for the country and the user fills it in at runtime. If the query parameter does exist, the user isn’t prompted, so if you prefer to create your own dialog for the user, or can derive the parameter’s value from the environment, you can create the parameter and refresh the view yourself. We often find ourselves coding this trick in calls to the REFRESH() function.
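Here’s a sketch of what that looks like in code. The view, connection and parameter names are invented; we use REQUERY() here to re-run the view’s query after changing the parameter.

CREATE SQL VIEW CustByCountry ;
   REMOTE CONNECTION NorthwindConn ;
   AS SELECT * FROM Customers WHERE Country = ?pcCountry

* If pcCountry exists when the view opens, the user isn't prompted.
pcCountry = "Germany"
USE CustByCountry IN 0

* Change the parameter, then re-run the view's query.
pcCountry = "France"
=REQUERY("CustByCountry")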
For I will pass through the land of Egypt this night …
—Exodus
SQL Pass-Through (or SPT) is for those who need to be in control. Instead of simply defining a view once and opening it as needed, with SPT you send the necessary commands to the server each time you want to see some data.
SPT works a lot like FoxPro’s low-level file functions. You open a connection, which gives you a handle. After that, you refer to the connection by the handle and call on appropriate functions to communicate with the server. When you’re done, you close the connection.
By providing more or less direct access to the server, SPT also lets you handle administrative functions on the server end of things. You can even perform tasks like creating tables on the server. Definitely, this mode is for those who know what they’re doing.
This makes SPT sound quite difficult and challenging. Perhaps the best use of SPT is to call stored procedures written on the server, and have them return cursors with the information (even if that cursor is a one-record table containing only one field). The stored procedures are written in the native language of the server; for example, SQL Server uses T-SQL. You can debug the stored procedure in its native environment (which may be more, less, or differently robust than VFP, but it likely checks the syntax properly and accesses the appropriate Help files). For small development shops, this may mean that you get to learn all about the server’s native language (which isn’t a bad idea if you’re working with that server), but for larger shops, it may mean that you can ask the DBA (who is hopefully familiar with the language) to write a stored procedure to do what you need. This gives you the best of both worlds: You don’t have to debug a complex SQL Server or Oracle statement that lives as a text string in VFP, and you might be able to get someone else to help optimize, write, or maintain the code.
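A hedged sketch of the whole cycle—connect, run a stored procedure, collect the cursor, disconnect. The DSN, login and procedure name are placeholders for your own.

LOCAL lnHandle, lnResult
lnHandle = SQLConnect("NorthwindDSN", "myuser", "mypassword")
IF lnHandle > 0
   * Ask the server to run the procedure and hand back a cursor.
   lnResult = SQLExec(lnHandle, "EXECUTE GetTopCustomers", "curTop")
   IF lnResult > 0
      SELECT curTop
      BROWSE
   ENDIF
   =SQLDisconnect(lnHandle)
ENDIF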
Which commands you can actually pass depends on the server. Different servers have different capabilities. You’ll need to learn the capabilities and liabilities of the servers you want to work with.
In historic events, the so-called great men are labels giving names to events, and like labels they have but the smallest connection with the event itself.
—Leo Tolstoy, War and Peace
To access remote data, you need a data source. You also have various options regarding the way Visual FoxPro interacts with that data source. Since you’re likely to want to do it the same way over and over again, FoxPro lets you create what it calls a connection.
A connection is simply a data source plus various properties, stored in the database and assigned a name. When you create a view or open a communication channel with SPT, one of your options is to use a named connection. Without it, you have to specify the data source and all the appropriate properties. Individual views can have their connection specified in the CREATE SQL VIEW command or within the VD interface (look under Advanced Options on the Query menu pad). Sharing a connection is the preferred option in a production environment, because connections consume valuable resources on the server. In addition, some servers are licensed by connection, rather than by user, and an errant application can consume many more connections than necessary, limiting the number of users.
The Connection Designer lets you specify connections more or less visually. You can CREATE CONNECTIONs in code with (what else?) the CREATE CONNECTION command. Connect and disconnect with (brace yourself) SQLConnect() and SQLDisconnect().
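For instance (with a made-up data source and credentials):

OPEN DATABASE MyData
CREATE CONNECTION NorthwindConn ;
   DATASOURCE "NorthwindDSN" USERID "myuser" PASSWORD "mypassword"
* Both views and SPT can then go through the named connection.
LOCAL lnHandle
lnHandle = SQLConnect("NorthwindConn")
IF lnHandle > 0
   =SQLDisconnect(lnHandle)
ENDIF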
Developers raise a legitimate concern that connection definitions are stored within the DBC and could include user IDs and passwords to access back-end databases. We agree. There are good solutions out there as well. First, don’t store the password with the connection. Either ask the operator for it at runtime, or store it in an encrypted field. Starting in VFP 7, it’s easier to do this with remote views, because the USE command has a CONNSTRING clause that overrides the connection information the view was defined with. This lets you supply things like the user name and password at runtime rather than at design-time.
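Something like this sketch, where the connection string is a placeholder and GetPassword() stands in for however you prompt the user or decrypt a stored value—it’s not a built-in function:

LOCAL lcConnStr
lcConnStr = "Driver=SQL Server;Server=MyServer;Database=Northwind;" ;
   + "UID=myuser;PWD=" + GetPassword()   && GetPassword() is your routine, not ours
USE CustomerView CONNSTRING lcConnStr IN 0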
The chief merit of language is clearness, and we know that nothing detracts so much from this as do unfamiliar terms.
—Galen, On the Natural Faculties
There are a number of items you can set to control access to remote data. Some of them apply only with SPT, while others affect views as well. Naturally, all of them come with long-winded terminology. Most of these settings apply to a connection, but some are properties of individual views or cursors.
A word of caution here. Several of the authors have run into situations where a developer has tweaked so many of these options that it’s impossible to figure out what’s causing performance or operational problems. As a general rule, the defaults are the right settings. Change these with care, and only one at a time, testing before and after the change to ensure that you’re seeing the effect you desire, and aren’t breaking other parts of the application with your change.
The first choice you have (for SPT or named connections only) is whether to use synchronous or asynchronous processing. In English, this means whether you have to wait for the command to finish or you get control back right away.
Synchronous processing is what we’re all pretty much used to. You issue a command and when it’s done, you go on. With asynchronous processing, you have to keep asking the server if it’s done. So why would you want to do this? If the command will take a long time, asynchronous processing lets you update the display (or even do other stuff) while you wait. It’s like taking a book along to the doctor’s office.
With asynchronous processing, you keep issuing the same command over and over until it returns a non-zero value. The payoff is whatever else you put in the loop that issues the command—a progress display, a Cancel button, whatever keeps your users happy.
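The loop looks something like this sketch. We assume lnHandle is an open connection handle; the table name is invented.

=SQLSetProp(lnHandle, "Asynchronous", .T.)
LOCAL lnResult
lnResult = 0
DO WHILE lnResult = 0
   * Reissue the very same call; 0 means "still working."
   lnResult = SQLExec(lnHandle, "SELECT * FROM BigTable", "curBig")
   WAIT WINDOW "Working..." TIMEOUT 0.1
ENDDO
* When the loop ends, lnResult is 1 (done) or negative (error).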
The next option is relevant only when you’re dealing with a server that lets you send multiple commands at once. (SQL Server does this; Access doesn’t.) When you do that, you have a choice of waiting for the whole group of commands to finish (batch processing) or starting things off with the first command and then coming back to ask for results of the others (non-batch). This one also applies only to SPT and named connections.
As with asynchronous processing, non-batch lets you do things in between and keep the user updated on progress. In this case, you start the commands and then use SQLMoreResults() each time you want results from a command after the first.
Although synchronous/asynchronous and batch/non-batch sound just about the same, they’re really not. Synchronous/asynchronous relate to a single command; batch/non-batch control a group of commands.
In fact, you can mix and match the two. Any of the four combinations of these two settings is permitted. Here are the choices and what they do:
Synchronous, batch: You issue the command (with SQLExec()) and wait. When control returns, there’s a cursor for each result set in the batch.
Synchronous, non-batch: SQLExec() returns as soon as the first result set is complete. You call SQLMoreResults() for each subsequent result set.
Asynchronous, batch: You keep reissuing the command (SQLExec()) until it returns something other than zero. At this point, there’s a cursor for each result set generated.
Asynchronous, non-batch: You keep reissuing SQLExec() until it returns non-zero, which gives you the first result set. Then you call SQLMoreResults() repeatedly for each additional result set. Each time SQLMoreResults() returns 1, you know you’ve got another result set. When it returns 2, you know you’re done.
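Here’s a sketch of the non-batch case, assuming a server (like SQL Server) that accepts multiple statements, an open connection handle, and our invented table names:

=SQLSetProp(lnHandle, "BatchMode", .F.)
LOCAL lnResult
lnResult = SQLExec(lnHandle, ;
   "SELECT * FROM Customers; SELECT * FROM Orders", "curResult")
DO WHILE lnResult = 1
   * 1 means another result set arrived (in a numbered cursor);
   * 2 means there are no more.
   lnResult = SQLMoreResults(lnHandle)
ENDDO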
Progressive fetching—sounds like something a new-age kind of dog would do. Actually, it means you get control back while the server keeps on sending the rest of the records. This one’s easy to test. Just create a view with a few thousand records in it (say, based on the OrderDetails table in Northwind). When you USE the view from the Command Window, watch what happens in the status bar. You get control back with only 100 records in the cursor, and you can go right ahead and Browse that cursor right away. But if you keep watching the status bar, you’ll see the record count going up and up until it reaches the actual table size.
You can set the number of records that get fetched at a time. You also control whether data in memo and general fields is retrieved up front, or not until you need it. Waiting until you need it speeds up the initial query at the cost of a slight delay when you want to edit a memo or general field.
Another setting lets you control the maximum number of records returned by the server. You get back the first however many records the server finds, and the rest are ignored.
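All three of these settings live in CursorSetProp(). A sketch, applied to a hypothetical open remote view:

SELECT CustomerView
=CursorSetProp("FetchSize", 200)     && records fetched per gulp (default is 100)
=CursorSetProp("MaxRecords", 5000)   && ignore anything past the first 5,000
=CursorSetProp("FetchMemo", .F.)     && don't fetch memos until they're needed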
The complete list of properties for remote data access is covered under the DBSetProp(), CursorSetProp() and SQLSetProp() functions.
Visual FoxPro’s built-in transactions let you protect your data when storing it. For example, you can wrap storage of an invoice and its detail records in a transaction. If storage of any one record fails, the whole transaction can be rolled back, preventing partial data from being stored.
FoxPro’s transactions don’t affect storage of remote data. If the server you’re dealing with supports transactions natively, use SQLSetProp() to set Transactions to Manual; without that setting, each individual SQL statement is wrapped in its own transaction. It’s far more likely that you’ll want to ensure that whole batches of updates happen together. You can control the remote transactions with the SQLCommit() and SQLRollback() functions. Remember, though, if you’ve assumed the responsibility of controlling transactions, you’d better remember to finish them! An uncommitted transaction is automatically rolled back if it’s not completed before the connection is lost. Also, transactions typically place a heavy burden on servers, forcing them to hold locks on a number of records and slowing processes for other applications accessing the server. Get in there, start your transaction, make your data changes and get out as quickly as possible, in order to get optimum performance from your server.
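The manual-transaction dance looks something like this sketch. We assume an open connection to a server that supports transactions; the updates themselves are invented.

=SQLSetProp(lnHandle, "Transactions", 2)   && 2 = manual transactions
LOCAL llOK
llOK = SQLExec(lnHandle, "UPDATE Orders SET Freight = Freight * 1.1") > 0
IF llOK
   llOK = SQLExec(lnHandle, "UPDATE OrderItems SET Backorder = 0") > 0
ENDIF
IF llOK
   =SQLCommit(lnHandle)     && both changes succeeded, so keep them
ELSE
   =SQLRollback(lnHandle)   && something failed, so toss both
ENDIF
=SQLSetProp(lnHandle, "Transactions", 1)   && back to automatic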
SQL Pass-Through functions handle errors somewhat differently than most FoxPro commands. In fact, error handling for SPT functions is similar to that for the low-level file functions they resemble. Unlike the LLFFs, though, the SPT functions don’t have their own dedicated error function—instead, they cooperate with the AERROR() function.
Rather than triggering a FoxPro error (or the specified error handler), the SPT functions return a negative number when they fail. According to Help, they return -1 when there’s a “connection-level” error and -2 when there’s an “environment-level” error. We haven’t been able to generate an environment-level error, but maybe we just haven’t tried the right destructive things.
We’ve also run into a few situations where an SPT function does go to the FoxPro error handler rather than returning a negative result. Passing bogus SQL sometimes generates error 1526 (a connectivity error). Check out a few of the errors in the 1520s and 1530s for items you’ll need to beware of. SQLDisconnect(), in particular, yells for help when you try to shut down a nonexistent connection or a connection that’s in use.
It looks like you usually get FoxPro’s error handler when FoxPro itself can detect the error, and you get a negative result when ODBC or the server finds the error. We sure wish it were uniform. This hybrid mix makes writing an error handler much harder. Your overall error handler has to be prepared to deal with some client-server errors, but any SPT code has to check each return value and call on an error handler if a negative value pops up.
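So a typical SPT call ends up looking something like this sketch:

LOCAL lnResult, laErr[1]
lnResult = SQLExec(lnHandle, "SELECT * FROM NoSuchTable", "curOops")
IF lnResult < 0
   * ODBC or the server balked; AERROR() has the details.
   =AERROR(laErr)
   ? laErr[1], laErr[2]   && error number and message text
ENDIF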
In spite of this weakness, we’re very satisfied with Visual FoxPro’s client-server capabilities. We see a lot of client-server work out there and are really glad that Visual FoxPro can now jump right into the fray.
If you talk to a cutting-edge developer about client-server work, you’re likely to get the reaction “Bah! Client-server? Way too limited! We’re into n-tier applications now.” It sounds way cool, but what does this mean?
The tiers in application development refer to the different independent processes performing data manipulations in an application. A one-tier application, like classic FoxPro applications using DBFs, Access applications with MDB files, or even Word or Excel, involves a client application that reads and writes to a file. While there might be a “server” somewhere out on the network providing file services, there is no other intelligent process intervening between the client and the data. Two-tier applications, on the other hand, have a client application—typically the one that interacts with the user—talking to a server application that processes the queries and allows (or disallows) updates to the data. Client-server allows centralized processing of data queries and centralization of all the logic needed to determine whether an update should be applied to the database.
What more could we possibly ask for? A two-tier application sounds like it has solved all problems. Client applications perform the locally intensive work of presenting the user interface, while a “big-iron” server can do heavy-duty queries with—what is to the server—local data. In addition, the server can provide a “firewall” of protection to make sure that bad data doesn’t get into the system, maintaining referential integrity and perhaps security as well.
Well, there are a few chinks in the armor. First, server products typically each have their own proprietary languages, usually extensions to SQL, and well, frankly, they stink. While they’re good at protecting their data and performing routine tasks like queries and triggers, they aren’t built for processing complex business logic (“If the customer has ordered more than $X from categories A, B and C within the last 90 days, not counting closeouts and specials, their shipping is free, unless…”), nor is there any easy way to transmit complex messages from the back end to the client. No client likes to be told “Update rejected,” but that’s about what you can get from a lot of client-server systems.
The other problem with client-server is that the problem model they were designed to solve—big, hefty servers with dumb little terminals—is outdated. While a large shop can field a symmetric multi-processing, RAID-5 server with a couple of gigabytes of RAM and 100 gig of drive space, it’s most likely being hit by Pentium III workstations with 256 MB of RAM themselves—not shabby machines. The idea that one box, even with eight processors on it, can do all of the heavy lifting for hundreds of high-powered users is a very poor use of resources.
Enter n-tier. The idea of an n-tier model is that the database server just serves the data. Its job in life is downsized from a one-stop shop to a big, dumb brute that serves data as rapidly as possible, performs queries, fires triggers and ensures relational integrity. Middle tiers come into play—tiers with an ‘s’—there might be more than one, hence we’re not talking about three tiers, but n, in the sense of 1, 2, 3, 4, … n objects between you and the data. A middle-tier object is where the business rules get placed—an object with clearly defined interfaces and a specific task to which it’s been assigned. By creating multiple objects through which the data passes, we can simplify the tasks of each object, and by using an object technology that lets us manage these objects, we can field the individual objects on whichever platform makes the most sense in terms of efficiency and distribution of processing.
So now we need a technology that can create objects with simple interfaces, manage them across an enterprise and allow us to do distributed processing. Is this possible? Ta-da! Microsoft to the rescue with COM and DCOM. Check out “It was Automation, You Know” and “Active Something” for some ideas on where these technologies are and where they are going.
As we mentioned earlier, Microsoft has gone ga-ga over data. We’re somewhat glad for the attention, but a little concerned whenever the Microsoft Marketeers get an idea. And lately, they’ve been getting a lot of them.
For a while, Microsoft pitched Universal Data Access (UDA) as the solution for all things data. The key technologies behind UDA are (get your acronym seat belt on!) ADO, OLE DB and ODBC. What are these? ADO, ActiveX Data Objects, are high-level COM components that provide basic data services within the familiar COM programming model. OLE DB is the underlying technology that ADO uses to reach data. OLE DB, in turn, uses either native OLE DB providers or ODBC to reach the data sources. VFP 7 includes an OLE DB provider for VFP, which is more scalable and provides more capabilities than using the ODBC driver (which is now in “support mode,” so it won’t be upgraded any more).
So why would we want to go to all this trouble to get to data when we can already do it quite well natively? Well, there are a number of situations where ADO has a distinct advantage over the built-in FoxPro one-tier and two-tier models. First, ADO presents a very simple model. Second, ADO recordsets are COM objects that can be passed from one object to the next with no dependence on work areas or knowledge of how to manipulate a cursor. Not only can ADO be passed between local objects, but these lightweight objects can be transferred over a variety of network protocols, including the Internet.
In addition, languages that lack a native database engine, like Visual Basic or Visual C++, have flocked to this technology, which effectively gives them a data engine to work with. If we want to play in the exciting world of components with these tools, we need to learn to speak the lingua franca as well.
Other pieces in this puzzle include MTS (for Windows NT) and COM+ (for Windows 2000 and later). Now that we have this gaggle of objects slinging ADO recordsets from place to place and attempting complex transactions, how do we coordinate all of this? Microsoft Transaction Server, MTS, was the piece Microsoft came up with to handle all this stuff. This server, running on a Windows NT Server machine, allows us to parcel out and load-balance resource dispensers of middle-tier objects and back-end connections in a manner somewhat akin to the TP monitors of the mainframe model. With the release of Windows 2000, MTS was no longer a bolt-on but became an integrated part of some new technology called COM+. MTS and COM+ are the crown jewels that link all of the ADO, DCOM, COM and OLE DB technologies into a grand whole. They’re the star players of Microsoft’s Digital Nervous System (DNS) and Distributed interNet Architecture (DNA). (We’re pretty nervous ourselves about the idea of a digital nervous system. Wouldn’t want trembling fingers on the red button.)
Whew! Had enough acronyms yet? Sorry, but there are more to come.
Like it did with the Internet, one day, Microsoft woke up and discovered XML, decided it was the wave of the future, and so started adding support for it into everything they build. Okay, maybe not everything; we’re still waiting for the XML version of Monster Truck Madness.
XML, which stands for eXtensible Markup Language, is to data as HTML is to presentation. It provides a simple way of working with data: The data elements are just text stored between “tags” that identify what the elements are. With some nice formatting, the average human can even read this stuff! That’s a far cry from DBF, Access MDB and SQL Server storage.
So what’s the big deal about storing data as text rather than in proprietary (or even standard) binary formats? The big deal is transportation. A cursor in VFP can’t go anywhere outside the VFP application (you can’t even pass it to another data session in the same application). While ADO has built-in mechanisms for transport over a wire (whether via DCOM to a server or HTTP over the Internet), some Internet sites have firewalls that prevent objects from getting in or out. And even if you can get the ADO recordset through, the receiver may not be running Windows (think Mac, Unix, or Linux) so it can’t do anything with the recordset it gets.
On the other hand, you can easily pass a text string pretty much anywhere. It doesn’t matter whether you pass it to another component running in the same language on the same machine, to a server running a component on the other side of your office, or to a Unix box somewhere behind a firewall on the other side of the planet—everything understands text.
For this reason, Microsoft says that XML is the preferred means for transporting data. Okay, okay, we’ve heard it all before. “No, really, this is the way you should do it. Forget about that stuff we told you to use last year that was supposed to replace the stuff we told you about the year before. Trust us!” This time, however, we think they’re on to something (as opposed to on something, which we’ve occasionally wondered about in the past). For one thing, you can’t get much simpler than text. Although you can use a fancy XMLDOM object to create and consume XML, you can also just generate and parse it using straight VFP code if you wish. For another, Microsoft (for once) isn’t alone in this. XML is becoming the foundation in almost every vendor’s plans. For example, within a few years, XML will replace EDI as the means of exchanging information between customer and supplier or business partners.
VFP 7 adds several new functions that allow it to play nice in the XML world. See the Reference section for details on XMLTOCURSOR(), CURSORTOXML() and XMLUPDATEGRAM().
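As a quick taste, here’s a minimal round trip through XML; the source cursor is hypothetical.

SELECT * FROM Customers INTO CURSOR curCusts
LOCAL lcXML
lcXML = ""
=CURSORTOXML("curCusts", "lcXML", 1)   && 1 = element-centric; output lands in lcXML
=XMLTOCURSOR(lcXML, "curCopy")         && and back into a brand-new cursor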
The pace at which new technology comes out of the offices in Redmond seems to accelerate every year. The next new thing is .NET, Microsoft’s framework for building applications. It includes yet another new data access technology, this one called ADO.NET. While it sounds like an updated version of ADO, it doesn’t bear a lot of resemblance to its predecessor. The good news is that, underlying the ADO.NET object model (which is actually pretty cool), is plain-old XML. So, transporting an ADO.NET dataset over the Internet means you’re actually just sending a bunch of text to someone, formatted so the receiver can either use it as is (if they’re not using Windows or .NET) or use it as an object-oriented set of data.
Sadly, VFP can’t use ADO.NET; it’s available only to languages based on the Common Language Runtime (CLR) of .NET, such as Visual Basic.NET and C# (although you could create a subclass of the ADO.NET classes, using a CLR language, that exposes ADO.NET to VFP or other COM clients). However, that doesn’t mean VFP can’t play in the .NET world. VFP can create and consume XML, or expose COM objects for consumption by a .NET application, so exchanging data with .NET applications won’t be a big deal.
I fear we have awoken the sleeping giant.
—Admiral Yamamoto, on hearing of the success of his Pearl Harbor attack
Microsoft has noticed data. Having all but cornered the market on office productivity software, on an uphill climb to take over network operating systems, and skirmishing to make the Internet its own, Microsoft has chosen to open yet another war front by attempting to wrest the enterprise applications from the mainframe and mini-computer crowd. To do that, Microsoft has been getting a lot better at working with data, and the efforts in DNA, ADO, XML and .NET are signs that Microsoft is making its move. Considering Microsoft’s track record, we have little doubt they’ll succeed. So we’ll keep an eye on their progress, and think you should, too.
In the meantime, we’ll continue using FoxPro to deliver the most powerful desktop applications on the planet.