Interaction Designer Help
Database Practices
Use views and stored procedures established for date/time conditions
Stored SQL procedures and views execute much faster than ad hoc queries, especially when date/time conditions are involved.
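As a toy illustration of the idea (a minimal sketch using SQLite as a stand-in; the CallLog table and RecentCalls view are hypothetical, not part of any CIC schema), a view can encapsulate the date/time predicate once, so each handler queries the view instead of repeating the raw condition:

```python
import sqlite3
from datetime import datetime, timedelta

# Stand-in for the database; table and view names here are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE CallLog (CallId INTEGER, StartedAt TEXT)")
now = datetime(2024, 1, 15, 12, 0, 0)
rows = [(i, (now - timedelta(hours=i)).isoformat(sep=" ")) for i in range(48)]
conn.executemany("INSERT INTO CallLog VALUES (?, ?)", rows)
conn.commit()

# Encapsulate the date/time condition in a view instead of repeating the
# raw predicate in every query.
conn.execute("""
    CREATE VIEW RecentCalls AS
    SELECT CallId, StartedAt
    FROM CallLog
    WHERE StartedAt >= datetime('2024-01-15 12:00:00', '-24 hours')
""")

# Callers now query the view; the predicate lives in exactly one place.
recent = conn.execute("SELECT COUNT(*) FROM RecentCalls").fetchone()[0]
print(recent)  # 25 calls fall within the 24-hour window (hours 0..24 inclusive)
```

Keeping the predicate in one server-side object also means a change to the window (say, 24 hours to 12) touches the view definition only, not every handler.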
Use the Logging Custom Passthrough tool for all database writes
Writing data to tables with the regular DB tools can fail if the connection to the database is down. By using the Logging Custom Passthrough tool, you can take advantage of CIC's built-in support for MSMQ and ensure that your writes are processed when the connection is restored. The only limitation is that you must create your own table in the CIC database; you cannot use the LCP tool to write to other databases.
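The store-and-forward idea behind this can be sketched as follows (a minimal Python sketch with an in-memory queue and SQLite standing in for MSMQ and the CIC database; the CustomLog table and the function names are hypothetical, and this is not the LCP tool's actual mechanism):

```python
import sqlite3
import queue

# Store-and-forward sketch: writes go to a durable queue first and are
# drained into the database once a connection is available again.
pending = queue.Queue()

def log_write(sql, params):
    # Enqueue instead of writing directly; succeeds even if the DB is down.
    pending.put((sql, params))

def drain(conn):
    """Apply the backlog once the database connection is (re)available."""
    applied = 0
    while not pending.empty():
        sql, params = pending.get()
        conn.execute(sql, params)
        applied += 1
    conn.commit()
    return applied

# Writes are accepted immediately, even while the DB is unreachable.
log_write("INSERT INTO CustomLog VALUES (?, ?)", (1, "call started"))
log_write("INSERT INTO CustomLog VALUES (?, ?)", (2, "call ended"))

# Later, the connection is restored and the backlog is applied in order.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE CustomLog (Id INTEGER, Message TEXT)")
print(drain(conn))  # 2 queued writes applied
count = conn.execute("SELECT COUNT(*) FROM CustomLog").fetchone()[0]
```

The real mechanism differs (MSMQ persists the queue across restarts, which a Python queue does not), but the ordering and at-least-once delivery of writes is the property the LCP tool gives you.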
Are all database connections closed when a handler is stopped? Should the connection be released/closed?
All handlers should contain cleanup code that is called when the handler exits. Every handler that creates a DB connection registers cleanup code to release the connection if it is not explicitly released in the handler.
In earlier releases of CIC, passing a connection handle to a subroutine could cause the reference count on the DB connection to be set improperly, so the connection was not automatically cleaned up if the handler exited unexpectedly. Even though this problem no longer exists, we still recommend that you explicitly release the connection yourself in the handler, both to be extra safe and to promote good programming habits.
Is the number of database connections unlimited?
In theory, yes. The default of 64 database connections is only a default; it can be increased by setting a command-line parameter on IPDBServer (when running as a service, you set this in the registry under the CIC process tree entry for IPDBServer). However, 64 is already a very large number of connections. If you find that you need more, you probably have some long-running transactions somewhere that need to be optimized. Although you can raise the maximum, keep in mind that in reality there is no such thing as a truly 'unlimited' number of connections. DB connections consume significant resources on both the client and the server, so you will hit a practical limit at some point. Exactly where that limit lies depends on the machines involved and how much degradation you are willing to live with, but somewhere around 100 connections is a good point to start being concerned.
Is it true that database connections must be opened and closed within a single handler, so that connections are not passed to subroutines as parameters? Should new connections instead be made in every handler?
Again, this isn't a requirement - we do support passing connection handles to subroutines. However, doing so usually violates the principle of not holding onto valuable resources (in this case, DB connections) any longer than you absolutely must. If the operations performed in the subroutine are logically part of the same transaction as the caller's, you should question why they are encapsulated in a subroutine at all. If they are an independent transaction, they should get and release their own connection. Remember, with connection caching, getting a connection (after the initial connection) is a very fast operation. Connection caching does change the way you do things, but it is worth it. Besides virtually eliminating the overhead of constant connecting and disconnecting, it also promotes good transactional programming style. In a very simplified form, here is one effective approach within CIC:
- Identify all of the individual database transactions needed. A transaction, for this discussion, means a series of database operations that must either all complete (commit) successfully or all be completely undone (rolled back). In other words, the series of operations must appear to be one single operation that either succeeds or fails (i.e., no partial success). Implicit in this definition is the requirement that the operations in the transaction be intricately linked in order to preserve database consistency; if they are not intricately linked, they should be separated into their own transactions.
Note that a transaction of just one operation is legitimate; in fact, this is the most common type of transaction. From a theoretical standpoint, whether a transaction is composed of one operation or several is irrelevant; from a programming standpoint, it matters. Most RDBMSs guarantee that an individual operation, such as an INSERT statement, behaves atomically (although the behavior can vary when triggers are introduced). But if you want this atomic behavior for multiple operations, you must inform the RDBMS of it. In CIC, you would issue a BEGIN TRANSACTION command (via DB SQL Exec), followed by either a COMMIT TRANSACTION or a ROLLBACK TRANSACTION when you are done.
People will sometimes group unrelated or semi-related operations together under one transaction when they really belong in separate transactions, often out of convenience or out of an attempt at optimization. Try hard not to do this. When you are identifying transactions, ask yourself: "Will my database really be in an inconsistent state if I split these operations apart and one of them succeeds while another fails?" If the answer is no, split them into separate transactions.
- Every transaction, whether composed of a single operation or several, should get and release its own connection. Minimize the amount of time the connection is held: get it just prior to using it, and release it immediately afterwards.
- Implement each transaction in its own dedicated subroutine. The subroutine should have no purpose other than executing the database transaction and returning any result data to the caller; it should not perform any other significant processing. While this may at first seem hard to accomplish, cases where you legitimately cannot do it are exceedingly rare.
A common situation where people think they cannot do this is when they need to fetch multiple rows of data and perform some work on each row. In almost all cases, however, fetching everything into a list first solves this.
One advantage of putting all your database code in separate, usually small, dedicated subroutines is that if your database schema changes, the corresponding handler changes are confined to a small set of well-isolated subroutines. Even better, if you decide to implement some transactions as stored procedures, the changes would involve only those subroutines. Finally, one of the biggest benefits of dedicated subroutines is that they help enforce the habit of identifying and encapsulating independent transactions.
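The steps above can be sketched as follows, with Python's sqlite3 standing in for the CIC DB tools. The Account table, the toy connection cache, and the subroutine names are assumptions for illustration only; real CIC connection caching is managed by IPDBServer:

```python
import sqlite3

# Toy connection cache: one pre-opened connection with a sample schema.
# isolation_level=None puts sqlite3 in autocommit mode so we can manage
# BEGIN/COMMIT/ROLLBACK explicitly, as the text describes for DB SQL Exec.
_pool = [sqlite3.connect(":memory:", isolation_level=None)]
_pool[0].execute("CREATE TABLE Account (Id INTEGER PRIMARY KEY, Balance INTEGER)")
_pool[0].executemany("INSERT INTO Account VALUES (?, ?)", [(1, 100), (2, 50)])

def get_connection():
    # After the initial connect, "getting" a connection is just a cache hit.
    return _pool.pop() if _pool else sqlite3.connect(":memory:", isolation_level=None)

def release_connection(conn):
    _pool.append(conn)

def transfer_credit(from_id, to_id, amount):
    """Dedicated subroutine for one multi-operation transaction."""
    conn = get_connection()                   # get just prior to use
    try:
        conn.execute("BEGIN TRANSACTION")
        conn.execute("UPDATE Account SET Balance = Balance - ? WHERE Id = ?",
                     (amount, from_id))
        conn.execute("UPDATE Account SET Balance = Balance + ? WHERE Id = ?",
                     (amount, to_id))
        conn.execute("COMMIT TRANSACTION")    # both updates take effect together...
    except sqlite3.Error:
        conn.execute("ROLLBACK TRANSACTION")  # ...or neither does
        raise
    finally:
        release_connection(conn)              # release immediately afterwards

def fetch_accounts():
    """Single-operation, read-only transaction: fetch everything into a list."""
    conn = get_connection()
    try:
        return conn.execute("SELECT Id, Balance FROM Account ORDER BY Id").fetchall()
    finally:
        release_connection(conn)

transfer_credit(1, 2, 30)
print(fetch_accounts())  # [(1, 70), (2, 80)]
```

Note how each subroutine holds a connection only for the duration of its own transaction, and how `fetch_accounts` returns a complete list so the caller can process rows without holding the connection open.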
Does all this opening and closing take additional resources?
No, since we are not actually opening & closing connections, but instead getting & releasing connections to/from the cache.
Related Topics
General Handler Authoring Practices
Limiting the Impact on Interaction Processor