Programming with SQL Relay using the Perl API

Establishing a Session

To use SQL Relay, you have to identify the connection that you intend to use.

use Firstworks::SQLRConnection;
use Firstworks::SQLRCursor;

$con=Firstworks::SQLRConnection->new("host",9000,"","user","password",0,1);

        ... execute some queries ...

After calling the constructor, a session is established when the first query, ping() or identify() is run.

For the duration of the session, the client stays connected to a database connection daemon. While one client is connected, no other client can connect. Care should be taken to minimize the length of a session.

If you're using a transactional database, ending a session has a catch. Database connection daemons can be configured to send either a commit or rollback at the end of a session if DML queries were executed during the session with no commit or rollback. Program accordingly.
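For example, rather than relying on how the connection daemon is configured, you can commit explicitly before ending the session. A minimal sketch (the table name is hypothetical, and a running SQL Relay server is assumed):

```perl
use Firstworks::SQLRConnection;
use Firstworks::SQLRCursor;

$con=Firstworks::SQLRConnection->new("host",9000,"","user","password",0,1);
$cur=Firstworks::SQLRCursor->new($con);

# Commit explicitly so the fate of this DML doesn't depend on
# the daemon's end-of-session commit/rollback configuration.
$cur->sendQuery("insert into my_table values (1,2,3)");
$con->commit();
$con->endSession();
```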

Executing Queries

Call sendQuery() or sendFileQuery() to run a query.

use Firstworks::SQLRConnection;
use Firstworks::SQLRCursor;

$con=Firstworks::SQLRConnection->new("host",9000,"","user","password",0,1);
$cur=Firstworks::SQLRCursor->new($con);

$cur->sendQuery("select * from my_table");

        ... do some stuff that takes a short time ...

$cur->sendFileQuery("/usr/local/myprogram/sql","myquery.sql");
$con->endSession();

        ... do some stuff that takes a long time ...

$cur->sendQuery("select * from my_other_table");
$con->endSession();

        ... process the result set ...

Note the call to endSession() after the call to sendFileQuery(). Since the program does some stuff that takes a long time between that query and the next, ending the session there allows another client an opportunity to use that database connection while your client is busy. The next call to sendQuery() establishes another session. Since the program does some stuff that takes a short time between the first two queries, it's OK to leave the session open between them.

Commits and Rollbacks

If you need to execute a commit or rollback, use the commit() and rollback() methods of the Firstworks::SQLRConnection class rather than sending a "commit" or "rollback" query. There are two reasons for this. First, it's much more efficient to call the methods. Second, if you're writing code that can run on either transactional or non-transactional databases, some non-transactional databases will throw errors if they receive a "commit" or "rollback" query. By calling the commit() and rollback() methods, you instruct the database connection daemon to call the commit and rollback API methods for that database rather than issuing them as queries. If the APIs have no commit or rollback methods, the calls do nothing and the database throws no error. This is especially important when using SQL Relay with ODBC.

You can also turn Autocommit on or off with the autoCommitOn() and autoCommitOff() methods of the Firstworks::SQLRConnection class. When Autocommit is on, the database performs a commit after each successful DML or DDL query. When Autocommit is off, the database commits when the client instructs it to, or (by default) when a client disconnects. For databases that don't support Autocommit, autoCommitOn() and autoCommitOff() have no effect.
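A sketch of batching several inserts into one transaction by turning Autocommit off (table name hypothetical, server assumed):

```perl
use Firstworks::SQLRConnection;
use Firstworks::SQLRCursor;

$con=Firstworks::SQLRConnection->new("host",9000,"","user","password",0,1);
$cur=Firstworks::SQLRCursor->new($con);

$con->autoCommitOff();
$cur->sendQuery("insert into my_table values (1,2,3)");
$cur->sendQuery("insert into my_table values (4,5,6)");
$con->commit();         # both inserts commit together
$con->autoCommitOn();   # restore per-query commits
$con->endSession();
```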

Catching Errors

If your call to sendQuery() or sendFileQuery() returns 0, the query failed. You can find out why by calling errorMessage().

use Firstworks::SQLRConnection;
use Firstworks::SQLRCursor;

$con=Firstworks::SQLRConnection->new("host",9000,"","user","password",0,1);
$cur=Firstworks::SQLRCursor->new($con);

if (!$cur->sendQuery("select * from my_nonexistent_table")) {
        printf("%s\n",$cur->errorMessage());
}
Substitution and Bind Variables

Programs rarely execute fixed queries. More often than not, some part of the query is dynamically generated. It's convenient to store queries in files so they can be changed by a non-Perl programmer. The SQL Relay API provides functions for making substitutions and binds in those queries.

For a detailed discussion of substitutions and binds, see this document.

Rather than just calling sendFileQuery() you call prepareFileQuery(), substitution(), inputBind() and executeQuery().

/usr/local/myprogram/sql/myquery.sql:

select * from mytable $(whereclause)

Program code:
use Firstworks::SQLRConnection;
use Firstworks::SQLRCursor;

$con=Firstworks::SQLRConnection->new("host",9000,"","user","password",0,1);
$cur=Firstworks::SQLRCursor->new($con);

$cur->prepareFileQuery("/usr/local/myprogram/sql","myquery.sql");
$cur->substitution("whereclause","where col1=:value1");
$cur->inputBind("value1","true");
$cur->executeQuery();

        ... process the result set ...

If you're using a database with an embedded procedural language, you may want to retrieve data from a call to one of its functions. To facilitate this, SQL Relay provides the defineOutputBind() and getOutputBind() methods.

use Firstworks::SQLRConnection;
use Firstworks::SQLRCursor;

$con=Firstworks::SQLRConnection->new("host",9000,"","user","password",0,1);
$cur=Firstworks::SQLRCursor->new($con);

$cur->prepareQuery("begin  :result:=addTwoNumbers(:num1,:num2);  end;");
$cur->inputBind("num1",10);
$cur->inputBind("num2",20);
$cur->defineOutputBind("result",100);
$cur->executeQuery();
$result=$cur->getOutputBind("result");
$con->endSession();

        ... do something with the result ...

The getOutputBind() function returns a NULL value as an empty string. If you would like it to come back as undefined instead, you can call the getNullsAsUndefined() method. To revert to the default behavior, you can call getNullsAsEmptyStrings().

Sometimes it's convenient to bind a bunch of variables that may or may not actually be in the query. For example, if you are building a web based application, it may be easy to just bind all the form variables/values from the previous page, even though some of them don't appear in the query. Databases usually generate errors in this case. Calling validateBinds() just prior to calling executeQuery() causes the API to check the query for each bind variable before actually binding it, preventing those kinds of errors. There is a performance cost associated with calling validateBinds().
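For example, a sketch of binding a superset of variables and letting validateBinds() weed out the ones the query doesn't use (table, column, and variable names hypothetical):

```perl
use Firstworks::SQLRConnection;
use Firstworks::SQLRCursor;

$con=Firstworks::SQLRConnection->new("host",9000,"","user","password",0,1);
$cur=Firstworks::SQLRCursor->new($con);

$cur->prepareQuery("select * from mytable where col1=:value1");
$cur->inputBind("value1","true");
$cur->inputBind("value2","unused");  # not in the query
$cur->validateBinds();               # skips binds absent from the query
$cur->executeQuery();
```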

Re-Binding and Re-Execution

Another feature of the prepare/bind/execute paradigm is the ability to prepare, bind and execute a query once, then re-bind and re-execute the query over and over without re-preparing it. If your backend database natively supports this paradigm, you can reap a substantial performance improvement.

use Firstworks::SQLRConnection;
use Firstworks::SQLRCursor;

$con=Firstworks::SQLRConnection->new("host",9000,"","user","password",0,1);
$cur=Firstworks::SQLRCursor->new($con);

$cur->prepareQuery("select * from mytable where mycolumn>:value");
$cur->inputBind("value",1);
$cur->executeQuery();

... process the result set ...

$cur->clearBinds();
$cur->inputBind("value",5);
$cur->executeQuery();

... process the result set ...

$cur->clearBinds();
$cur->inputBind("value",10);
$cur->executeQuery();

... process the result set ...
Accessing Fields in the Result Set

The rowCount(), colCount() and getField() methods are useful for processing result sets.

use Firstworks::SQLRConnection;
use Firstworks::SQLRCursor;

$con=Firstworks::SQLRConnection->new("host",9000,"","user","password",0,1);
$cur=Firstworks::SQLRCursor->new($con);

$cur->sendQuery("select * from my_table");
$con->endSession();

for ($row=0; $row<$cur->rowCount(); $row++) {
        for ($col=0; $col<$cur->colCount(); $col++) {
                printf("%s,",$cur->getField($row,$col));
        }
        printf("\n");
}

You can also use getRow() or getRowHash() to get the entire row.

use Firstworks::SQLRConnection;
use Firstworks::SQLRCursor;

$con=Firstworks::SQLRConnection->new("host",9000,"","user","password",0,1);
$cur=Firstworks::SQLRCursor->new($con);

$cur->sendQuery("select * from my_table");
$con->endSession();

for ($row=0; $row<$cur->rowCount(); $row++) {
        @rowarray=$cur->getRow($row);
        for ($col=0; $col<$cur->colCount(); $col++) {
                printf("%s,",$rowarray[$col]);
        }
        printf("\n");
}

The getField(), getRow() and getRowHash() methods return NULL fields as empty strings. If you would like them to come back as NULL's instead, you can call the getNullsAsNulls() method. To revert to the default behavior, you can call getNullsAsEmptyStrings().

If you want to access the result set, but don't care about the column information (column names, types or sizes) and don't mind getting fields by their numeric index instead of by name, you can call the dontGetColumnInfo() method prior to executing your query. This can result in a performance improvement, especially when many queries with small result sets are executed in rapid succession. You can call getColumnInfo() again later to turn off this feature.
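A sketch that skips column info when only the first field of each row is needed (table name hypothetical, server assumed):

```perl
use Firstworks::SQLRConnection;
use Firstworks::SQLRCursor;

$con=Firstworks::SQLRConnection->new("host",9000,"","user","password",0,1);
$cur=Firstworks::SQLRCursor->new($con);

$cur->dontGetColumnInfo();
$cur->sendQuery("select * from my_table");
$con->endSession();
for ($row=0; $row<$cur->rowCount(); $row++) {
        # fields are still accessible by numeric index
        printf("%s\n",$cur->getField($row,0));
}
$cur->getColumnInfo();  # restore column info for later queries
```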

Dealing With Large Result Sets

SQL Relay normally buffers the entire result set. This can speed things up at the cost of memory. With large enough result sets, it makes sense to buffer the result set in chunks instead of all at once.

Use setResultSetBufferSize() to set the number of rows to buffer at a time. Calls to getRow(), getRowHash() and getField() cause the chunk containing the requested field to be fetched. Rows in that chunk are accessible but rows before it are not.

For example, if you setResultSetBufferSize(5) and execute a query that returns 20 rows, rows 0-4 are available at once, then rows 5-9, then 10-14, then 15-19. When rows 5-9 are available, getField(0,0) will return NULL and getField(11,0) will cause rows 10-14 to be fetched and return the requested value.

When buffering the result set in chunks, don't end the session until after you're done with the result set.

If you call setResultSetBufferSize() and forget what you set it to, you can always call getResultSetBufferSize().

When buffering a result set in chunks, the rowCount() method returns the number of rows returned so far. The firstRowIndex() method returns the index of the first row of the currently buffered chunk.

use Firstworks::SQLRConnection;
use Firstworks::SQLRCursor;

$con=Firstworks::SQLRConnection->new("host",9000,"","user","password",0,1);
$cur=Firstworks::SQLRCursor->new($con);

$cur->setResultSetBufferSize(5);

$cur->sendQuery("select * from my_table");

$done=0;
$row=0;
while (!$done) {
        for ($col=0; $col<$cur->colCount(); $col++) {
                if ($field=$cur->getField($row,$col)) {
                        printf("%s,",$field);
                } else {
                        $done=1;
                }
        }
        printf("\n");
        $row++;
}

$cur->sendQuery("select * from my_other_table");

        ... process this query's result set in chunks also ...

$cur->setResultSetBufferSize(0);

$cur->sendQuery("select * from my_third_table");

        ... process this query's result set all at once ...

$con->endSession();
Cursors

Cursors make it possible to execute queries while processing the result set of another query. You can select rows from a table in one query, then iterate through its result set, inserting rows into another table, using only one database connection for both operations.

For example:

use Firstworks::SQLRConnection;
use Firstworks::SQLRCursor;

$con=Firstworks::SQLRConnection->new("host",9000,"","user","password",0,1);
$cursor1=Firstworks::SQLRCursor->new($con);
$cursor2=Firstworks::SQLRCursor->new($con);

$cursor1->setResultSetBufferSize(10);
$cursor1->sendQuery("select * from my_huge_table");

$index=0;
$cursor2->prepareQuery("insert into my_other_table values (:1,:2,:3)");
while (!$cursor1->endOfResultSet()) {
        $cursor2->clearBinds();
        $cursor2->inputBind("1",$cursor1->getField($index,1));
        $cursor2->inputBind("2",$cursor1->getField($index,2));
        $cursor2->inputBind("3",$cursor1->getField($index,3));
        $cursor2->executeQuery();
        $index++;
}

Prior to SQL Relay version 0.25, you would have had to buffer the first result set or use 2 database connections instead of just 1.

The number of cursors simultaneously available per-connection is set at compile time and defaults to 5.

Getting Column Information

The name, type, length and length of the longest field of each column are available.

use Firstworks::SQLRConnection;
use Firstworks::SQLRCursor;

$con=Firstworks::SQLRConnection->new("host",9000,"","user","password",0,1);
$cur=Firstworks::SQLRCursor->new($con);

$cur->sendQuery("select * from my_table");
$con->endSession();

for ($i=0; $i<$cur->colCount(); $i++) {
        printf("Name:          %s\n",$cur->getColumnName($i));
        printf("Type:          %s\n",$cur->getColumnType($i));
        printf("Length:        %s\n",$cur->getColumnLength($i));
        printf("Longest Field: %s\n",$cur->getLongest($i));
        printf("\n");
}
Caching The Result Set

Say you're writing a web-based report where a query with a huge result set is executed and 20 rows are displayed per page. Rather than rerunning the query for every page every time and dumping all but the 20 rows you want to display, you can run the query once, cache the result set to a local file and just open the file for each page of the report.

First CGI:

use Firstworks::SQLRConnection;
use Firstworks::SQLRCursor;

$con=Firstworks::SQLRConnection->new("host",9000,"","user","password",0,1);
$cur=Firstworks::SQLRCursor->new($con);

	... generate a unique file name ...

$cur->cacheToFile($filename);
$cur->setCacheTtl(600);
$cur->sendQuery("select * from my_table");
$con->endSession();
$cur->cacheOff();

        ... pass the filename to the next page ...

Second CGI:

use Firstworks::SQLRConnection;
use Firstworks::SQLRCursor;

        ... get the filename from the previous page ...

        ... get the page to display from the previous page ...

$con=Firstworks::SQLRConnection->new("host",9000,"","user","password",0,1);
$cur=Firstworks::SQLRCursor->new($con);

$cur->openCachedResultSet($filename);
$con->endSession();

for ($row=$pagetodisplay*20; $row<($pagetodisplay+1)*20; $row++) {
        for ($col=0; $col<$cur->colCount(); $col++) {
                printf("%s,",$cur->getField($row,$col));
        }
        printf("\n");
}
Suspending and Resuming Sessions

Sometimes web-based applications need a single database transaction to span multiple pages. Since SQL Relay sessions can be suspended and resumed, this is possible.

First CGI:

use Firstworks::SQLRConnection;
use Firstworks::SQLRCursor;

$con=Firstworks::SQLRConnection->new("host",9000,"","user","password",0,1);
$cur=Firstworks::SQLRCursor->new($con);

$cur->sendQuery("insert into my_table values (1,2,3)");
$port=$con->getConnectionPort();
$socket=$con->getConnectionSocket();
$rs=$cur->getResultSetId();
$cur->suspendResultSet();
$con->suspendSession();

        ... pass the rs, port and socket to the next page ...

Second CGI:

use Firstworks::SQLRConnection;
use Firstworks::SQLRCursor;

        ... get rs, port and socket from previous page ...

$con=Firstworks::SQLRConnection->new("host",9000,"","user","password",0,1);
$cur=Firstworks::SQLRCursor->new($con);

$con->resumeSession($port,$socket);
$cur->resumeResultSet($rs);
$cur->sendQuery("commit");
$con->endSession();

You can also distribute the processing of a result set across a series of CGIs using suspended sessions. If you're buffering a result set in chunks instead of all at once and you suspend the session, you can continue to retrieve rows from the result set when you resume it.

Similarly, if you're buffering a result set in chunks and caching it, you can suspend your session and continue caching the result set after you resume it. You must use resumeCachedResultSet() instead of resumeResultSet(), however.
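
A sketch combining the above; this assumes resumeCachedResultSet() takes the result set id and the cache file name, and that $filename, $rs, $port and $socket are passed between the pages as in the earlier examples:

```perl
use Firstworks::SQLRConnection;
use Firstworks::SQLRCursor;

# --- First CGI: start a cached, chunk-buffered result set ---
$con=Firstworks::SQLRConnection->new("host",9000,"","user","password",0,1);
$cur=Firstworks::SQLRCursor->new($con);

$cur->setResultSetBufferSize(10);
$cur->cacheToFile($filename);
$cur->sendQuery("select * from my_table");

# ... fetch some rows ...

$rs=$cur->getResultSetId();
$cur->suspendResultSet();
$con->suspendSession();
$port=$con->getConnectionPort();
$socket=$con->getConnectionSocket();

# ... pass rs, port, socket and filename to the next page ...

# --- Second CGI: resume both the session and the cached result set ---
$con->resumeSession($port,$socket);
$cur->resumeCachedResultSet($rs,$filename);

# ... continue fetching; caching picks up where it left off ...

$cur->cacheOff();
$con->endSession();
```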