- Contents
- Notices
- Trademarks
- Preface
- The team who wrote this book
- Now you can become a published author, too!
- Comments welcome
- Stay connected to IBM Redbooks
- Chapter 1. Introduction
- 1.1 The opportunity of the in-memory database
- 1.1.1 Disk databases cannot expand to memory
- 1.1.2 IBM solidDB IMDB is memory-friendly
- 1.1.3 Misconceptions
- 1.1.4 Throughput and response times
- 1.2 Database caching with in-memory databases
- 1.2.1 Databases are growing
- 1.2.2 Database caching off-loads the enterprise server
- 1.2.3 IBM solidDB Universal Cache
- 1.3 Applications, competition, and the marketplace
- Chapter 2. IBM solidDB details
- 2.1 Introduction
- 2.2 Server architecture
- 2.2.1 Database access methods and network drivers
- 2.2.2 Server components
- 2.3 Data storage in solidDB
- 2.3.1 Main-memory engine
- 2.4 Table types
- 2.4.1 In-memory versus disk-based tables
- 2.4.2 Persistent versus non-persistent tables
- 2.4.3 Choosing between different table types
- 2.5 Transactionality
- 2.5.1 Concurrency control and locking
- 2.5.2 Isolation levels
- 2.5.3 Durability levels
- 2.6 solidDB SQL extensions
- 2.6.1 solidDB SQL standard compliance
- 2.6.2 Stored procedures
- 2.6.3 Triggers
- 2.6.4 Sequences
- 2.6.5 Events
- 2.6.6 Replication
- 2.7 Database administration
- 2.7.1 Configuration settings
- 2.7.2 ADMIN COMMAND
- 2.7.3 Data management tools
- 2.7.4 Database object hierarchy
- Chapter 3. IBM solidDB Universal Cache details
- 3.1 Architecture
- 3.1.1 Architecture and key components
- 3.1.2 Principles of operation
- 3.2 Deployment models
- 3.3 Configuration alternatives
- 3.3.1 Typical configuration
- 3.3.2 Multiple cache nodes
- 3.3.3 SMA for collocation of data
- 3.3.4 solidDB HSB servers for high availability
- 3.4 Key aspects of cache setup
- 3.4.1 Deciding on the replication model
- 3.4.2 Defining what to replicate
- 3.4.3 Starting replication
- 3.5 Additional functionality for cache operations
- 3.5.1 SQL pass-through
- 3.5.2 Aging
- 3.5.3 Improving performance with parallelism
- 3.6 Increasing scale of applications
- 3.6.1 Scaling strategies
- 3.6.2 Examples of cache database applications
- 3.7 Enterprise infrastructure effects of the solidDB Universal Cache
- 3.7.1 Network latency and traffic
- 3.7.3 Database operation execution
- Chapter 4. Deploying solidDB and Universal Cache
- 4.1 Change and consideration
- 4.2 How to develop applications that use solidDB
- 4.2.1 Application program structure
- 4.2.2 ODBC
- 4.2.3 JDBC
- 4.2.4 Stored procedures
- 4.2.5 Special considerations
- 4.3 New application development on solidDB UC
- 4.3.1 Awareness of separate database connections
- 4.3.2 Combining data from separate databases in a transaction
- 4.3.3 Combining data from different databases in a query
- 4.3.4 Transactionality with Universal Cache
- 4.3.5 Stored procedures in Universal Cache architectures
- 4.4 Integrate an existing application to work with solidDB UC
- 4.4.1 Programming interfaces used by the application
- 4.4.2 Handling two database connections instead of one
- 4.5 Data model design
- 4.5.1 Data model design principles
- 4.5.2 Running in-memory and disk-based tables inside solidDB
- 4.5.3 Data model design for solidDB UC configurations
- 4.6 Data migration
- 4.7 Administration
- 4.7.1 Regular administration operations
- 4.7.2 Information to collect
- 4.7.3 Procedures to plan in advance
- 4.7.4 Automation of administration by scripts
- Chapter 5. IBM solidDB high availability
- 5.1 High availability (HA) in databases
- 5.2 IBM solidDB HotStandby
- 5.2.1 Architecture
- 5.2.2 State behavior of solidDB HSB
- 5.2.3 solidDB HSB replication and transaction logging
- 5.2.4 Uninterruptable system maintenance and rolling upgrades
- 5.3 HA management in solidDB HSB
- 5.3.1 HA control with a third-party HA framework
- 5.3.2 HA control with the watchdog sample
- 5.3.3 Using solidDB HA Controller (HAC)
- 5.3.4 Preventing Dual Primaries and Split-Brain scenarios
- 5.4 Use of solidDB HSB in applications
- 5.4.1 Location of applications in the system
- 5.4.2 Failover transparency
- 5.4.3 Load balancing
- 5.4.4 Linked applications versus client/server applications
- 5.5 Usage guidelines, use cases
- 5.5.1 Performance considerations
- 5.5.2 Behavior of reads and writes in a HA setup
- 5.5.3 Using asynchronous configurations with HA
- 5.5.4 Using default solidDB HA setup
- 5.5.5 The solidDB HA setup for best data safeness
- 5.5.6 Failover time considerations
- 5.5.7 Recovery time considerations
- 5.5.8 Example situation
- 5.5.9 Application failover
- 5.6 HA in Universal Cache
- 5.6.1 Universal Cache HA architecture
- 5.6.2 UC failure types and remedies
- 6.1 Performance
- 6.1.1 Tools available in the solidDB server
- 6.1.2 Tools available in InfoSphere CDC
- 6.1.3 Performance troubleshooting from the application perspective
- 6.2 Troubleshooting
- Chapter 7. Putting solidDB and the Universal Cache to good use
- 7.1 solidDB and Universal Cache sweet spots
- 7.1.1 Workload characteristics
- 7.1.2 System topology characteristics
- 7.1.3 Sweet spot summary
- 7.2 Return on investment (ROI) considerations
- 7.2.1 solidDB Universal Cache stimulates business growth
- 7.2.2 solidDB server reduces cost of ownership
- 7.2.3 solidDB Universal Cache helps leverage enterprise DBMS
- 7.2.4 solidDB Universal Cache complements DB2 Connect
- 7.3 Application classes
- 7.3.1 WebSphere Application Server
- 7.3.2 WebLogic Application Server
- 7.3.3 JBoss Application Server
- 7.3.4 Hibernate
- 7.3.5 WebSphere Message Broker
- 7.4 Examining specific industries
- 7.4.1 Telecom (TATP)
- 7.4.2 Financial services
- 7.4.3 Banking Payments Framework
- 7.4.4 Securities Exchange Reference Architecture (SXRA)
- 7.4.5 Retail
- 7.4.6 Online travel industry
- 7.4.7 Media
- Chapter 8. Conclusion
- 8.1 Where are you putting your data
- 8.2 Considerations
- Glossary
- Abbreviations and acronyms
- Index
The durability level can be set as a server default, per session, or per transaction.
Strict durability: synchronous logging
With strict durability, transaction logging is synchronous: the transaction is written to the transaction logs as soon as the transaction is committed.
Relaxed durability: asynchronous logging
With relaxed durability, transaction logging is asynchronous: solidDB is permitted to defer the transaction write until the server is less busy, or until it can write multiple transactions together.
In a server that is not part of a HotStandby pair, using relaxed durability means that you risk losing the most recent few transactions if the server terminates abnormally. If the server is part of a HotStandby pair, a copy of each transaction is on the other server (the Secondary); even if the Primary server fails before logging the transaction, the transaction is not lost. Thus, when used with HSB, relaxed durability causes little reduction in safety. On the other hand, it can improve the performance of the system, especially when the server load consists of a large number of small write transactions.
Adaptive durability
Adaptive durability applies only to HotStandby Primary servers. It means that if the server is in Primary Active state (sending transactions to the Secondary), it uses relaxed durability; in any other state, it uses strict durability. This approach gives you high performance (with little loss of safety) when HSB is active, yet maintains high safety if only one server is operating. Adaptive durability is effective only when HotStandby is set to use 2-safe replication: the Primary server does not tell the client that the transaction has been successfully committed until the Primary receives acknowledgement that the Secondary has the transaction.
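The three scopes (server default, session, transaction) can be sketched as follows. The statement forms and the solid.ini parameter shown here are a sketch based on recollection of the solidDB documentation; verify the exact names and values against your solidDB version before use.

```sql
-- The server default comes from the DurabilityLevel parameter in the
-- [Logging] section of solid.ini (assumed mapping: 1 = relaxed,
-- 2 = adaptive, 3 = strict).

-- Per session: applies to subsequent transactions on this connection.
SET DURABILITY RELAXED;

-- Per transaction: applies only to the current transaction.
SET TRANSACTION DURABILITY STRICT;
```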
2.6 solidDB SQL extensions
The SQL support in solidDB is comparable to that of other advanced SQL-based systems; solidDB offers the most commonly expected features plus a set of useful extensions that employ solidDB-specific (nonstandard) SQL syntax. Additionally, procedural SQL extensions such as stored procedures and triggers enable moving parts of the application logic into the database. These extensions help reduce network traffic, thus improving performance.
Chapter 2. IBM solidDB details 31
2.6.1 solidDB SQL standard compliance
No commercial relational DBMS fully supports the SQL standard beyond the SQL-92 Entry Level, and solidDB is no exception. The full standards known as SQL-92, SQL-99, and SQL 2003 are too broad to be implemented in a cost-efficient manner.
solidDB supports the SQL-92 Entry Level fully and has adopted selected features from the broader standards. One example of an advanced standard feature is the ability to manage table constraints dynamically by using the ALTER TABLE syntax.
In addition to standard features, solidDB also borrows suitable, nonstandard solutions from other proprietary products. Examples are as follows:
- START WITH ... CONNECT BY syntax for calculating hierarchical queries
- LIMIT clause for limiting the size of the result set
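As a sketch of these two extensions (the table and column names are hypothetical, and the exact clause placement accepted by solidDB may differ in detail):

```sql
-- Hierarchical query: walk an employee tree down from the root manager.
SELECT id, name, manager_id
  FROM employees
  START WITH manager_id IS NULL
  CONNECT BY PRIOR id = manager_id;

-- Limit the result set to the first 10 rows.
SELECT id, name
  FROM employees
  ORDER BY name
  LIMIT 10;
```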
2.6.2 Stored procedures
Stored procedures are simple programs, or procedures, that are parsed and compiled once, and then stored in the database for later execution. Because stored procedures are stored and executed directly in the server, using them reduces network traffic and can thus improve performance. For example, complex, data-bound transactions can be run on the server itself.
You can create a procedure that contains several SQL statements or a whole transaction and execute it with a single call statement. In addition to SQL statements, 3GL-type control structures can be used, enabling procedural control. You can also create nested stored procedures, where one procedure is executed from within another.
Stored procedures can also be used for controlling access rights and database operations. Granting execute rights on a stored procedure automatically grants the necessary access rights to all database objects used in the procedure. Therefore, administering database access rights can be greatly simplified by allowing access to critical data only through procedures.
Stored procedures are created and called using SQL statements.
32 IBM solidDB: Delivering Data with Extreme Speed
The three calling methods for stored procedures are local, remote, and deferred:
- Local procedures are executed on the local database server.
- Remote procedures are stored on one server and called by another. Remote stored procedures are applicable only to advanced replication setups.
- Deferred procedures are called after the commit has been processed.
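A minimal local procedure might look as follows. The table, procedure, and parameter names are hypothetical, and the body is a sketch of the solidDB procedure language (which uses EXEC SQL statements); consult the solidDB SQL documentation for the authoritative syntax.

```sql
-- Hypothetical procedure: add an amount to an account balance.
CREATE PROCEDURE credit_account (acct_id INTEGER, amount FLOAT)
BEGIN
  -- Prepare a parameterized statement, then execute it with the
  -- procedure arguments bound to the parameter markers.
  EXEC SQL PREPARE upd
    UPDATE accounts SET balance = balance + ? WHERE id = ?;
  EXEC SQL EXECUTE upd USING (amount, acct_id);
END;

-- Execute the whole procedure with a single call statement:
CALL credit_account (42, 100.0);
```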
2.6.3 Triggers
A trigger is a mechanism for executing a series of SQL statements when a particular action (an INSERT, UPDATE, or DELETE) occurs. The trigger contains the SQL statements that need to be executed when the trigger is invoked. Triggers are created by using the solidDB proprietary stored procedure syntax.
You can create one or more triggers on a table, with each trigger defined to activate on a specific INSERT, UPDATE, or DELETE command. When a user modifies data within the table, the trigger that corresponds to the command is activated.
You can use only inline SQL or stored procedures with triggers. If you use a stored procedure in the trigger, the procedure must be created with the CREATE PROCEDURE command. A procedure invoked from a trigger body can invoke other triggers.
Triggers enable you to perform the following tasks:
- Implement special integrity constraints, such as checking that certain conditions are maintained, to prevent users from making incorrect or inconsistent data changes.
- Take action based on the value of a row before or after modification.
- Transfer much of the logic processing to the back end, reducing the amount of work that your application needs to do and reducing network traffic.
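A trigger definition might be sketched as follows. The table and column names are hypothetical, and the REFERENCING clause shape is a sketch of the solidDB trigger syntax (which reuses the stored procedure language for the body); verify against the solidDB documentation.

```sql
-- Hypothetical audit trigger: record old and new balances on every update.
CREATE TRIGGER accounts_audit ON accounts
  AFTER UPDATE
  REFERENCING OLD balance AS old_balance, NEW balance AS new_balance
BEGIN
  -- The body uses the same EXEC SQL statements as a stored procedure.
  EXEC SQL PREPARE ins
    INSERT INTO accounts_log (old_balance, new_balance) VALUES (?, ?);
  EXEC SQL EXECUTE ins USING (old_balance, new_balance);
END;
```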
2.6.4 Sequences
Sequences are objects that are used to generate sequence numbers in an efficient manner. Sequence objects can be used, for example, to generate primary key values. The advantage of using a sequence object instead of a separate counter table is that the sequence object is specifically fine-tuned for fast execution and requires less overhead than normal update statements.
By default, solidDB sequences are sparse: there is no guarantee that the generated sequence numbers are consecutive (they are, however, unique). The alternative is a dense sequence, whose generated numbers follow each other with no gaps. The penalty of dense sequences is that they are locked by the transactions incrementing them, so no two transactions can increment the same sequence at the same time; one transaction must wait until the other commits or aborts. Sparse sequences perform better because they are not locked by the incrementing transactions.
Sequence objects are created with the CREATE SEQUENCE or CREATE DENSE SEQUENCE statement. Sequence values can be incremented and used within SQL statements using the sequence_name.CURRVAL and sequence_name.NEXTVAL constructs. Sequences can also be used inside stored procedures.
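The following sketch ties these constructs together (the table and sequence names are hypothetical):

```sql
CREATE SEQUENCE order_seq;          -- sparse (default): unique, may have gaps
CREATE DENSE SEQUENCE invoice_seq;  -- dense: consecutive, locked per transaction

-- NEXTVAL increments the sequence and returns the new value;
-- CURRVAL re-reads the value most recently generated in this session.
INSERT INTO orders (id, item) VALUES (order_seq.NEXTVAL, 'widget');
SELECT order_seq.CURRVAL FROM orders WHERE item = 'widget';
```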
2.6.5 Events
Events are database objects that are used for signaling in solidDB databases. Together with stored procedures, events can be used for automating administrative tasks. You can make your application use event alerts instead of polling, which consumes more resources.
The events mechanism is based on one connection waiting on an event until another connection posts that event. More than one connection may wait on the same event. If multiple connections wait on the same event, all waiting connections are notified when the event is posted. A connection may also wait on multiple events, in which case it will be notified when any of those events are posted.
In addition to system events, solidDB also supports user-defined events. However, user-defined events can be used only within stored procedures; system events can also be used without stored procedures. Events are managed by using SQL statements.
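The wait/post mechanism might be sketched as follows. The event, procedure, and parameter names are hypothetical, and the WAIT EVENT block shape is a sketch of the solidDB procedure-language syntax; verify against the solidDB documentation.

```sql
-- A user-defined event carrying one parameter.
CREATE EVENT stock_low (item_id INTEGER);

-- Waiting side: run on the listening connection; the call blocks until
-- another connection posts stock_low, then returns the posted item id.
CREATE PROCEDURE wait_stock_low
RETURNS (item INTEGER)
BEGIN
  WAIT EVENT
    WHEN stock_low (item) BEGIN
    END EVENT
  END WAIT;
END;

-- Posting side: run on another connection to notify all waiters.
CREATE PROCEDURE post_stock_low (i INTEGER)
BEGIN
  POST EVENT stock_low (i);
END;
```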
