In the first versions of SQL Server (4.21a through 6.5) there was a feature with the self-explanatory name “Tempdb in RAM”. Due to the immature state of memory management in SQL Server at that time, performance could sometimes be improved by instantiating tempdb in memory rather than creating it on disk.
With the release of SQL Server 7, the option to put tempdb in RAM disappeared. The reason was that, with the better memory management in SQL 7, Microsoft felt this feature was no longer needed. Just giving that memory to SQL to manage was thought to be a better use of physical memory, which in those days was expensive. A lot of production SQL instances had only a gigabyte or two of RAM to work with.
That Was Then. This Is Now
Now, with the release of SQL Server 2014, the SQL team has included a feature to create tables that live in memory. When I first read the name Memory-Optimized Tables, I thought Microsoft might be dredging up the old RAM disk technology or pinning tables in memory, but a quick look at the details convinced me that this is an entirely different kind of animal. On closer examination, I believe this technology has tremendous potential to improve database performance.
As well as memory-optimized tables, this technology also gives us memory-optimized stored procedures and indexes. At the heart of this panoply of memory-optimized objects is Native Compilation, new in SQL 2014 and only useful for memory-optimized objects.
The term Native Compilation refers to the process of compiling memory-optimized tables and procedures to native code consisting of processor instructions that are executed directly by the CPU. The result of native compilation is a DLL that runs in the SQL Server memory space and will execute much more quickly than conventional TSQL interpreted code.
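To make that concrete, here is roughly what a memory-optimized table definition looks like based on the pre-release documentation. The table and column names are my own invention, and the database is assumed to already contain a filegroup declared with CONTAINS MEMORY_OPTIMIZED_DATA:

```sql
-- Hypothetical example; assumes the database already has a
-- MEMORY_OPTIMIZED_DATA filegroup.
CREATE TABLE dbo.ShoppingCart
(
    -- Memory-optimized tables use new index types, such as this
    -- nonclustered hash index sized by a bucket count.
    CartId      INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId      INT       NOT NULL,
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON,           -- native compilation to a DLL
      DURABILITY = SCHEMA_AND_DATA);   -- data survives a restart
```

DURABILITY = SCHEMA_AND_DATA gives you full recoverability; SCHEMA_ONLY trades that away for even faster, non-durable tables.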
Memory-optimized stored procedures are likewise compiled into low-level processor instructions and housed in a DLL. They also execute much more quickly than interpreted TSQL code. But it is well beyond the scope of this article to describe the underlying technology in detail. We will instead just mention the potential benefits that this technology might give you.
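A natively compiled procedure is declared with a handful of new options; per the pre-release documentation it must be schema-bound and its body wrapped in an atomic block. This sketch assumes a memory-optimized table called dbo.ShoppingCart, which is my own hypothetical example:

```sql
CREATE PROCEDURE dbo.usp_AddCartItem
    @CartId INT,
    @UserId INT
WITH NATIVE_COMPILATION,  -- compile the body to machine code in a DLL
     SCHEMABINDING,       -- required for natively compiled procedures
     EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'english')
    -- dbo.ShoppingCart is assumed to be a memory-optimized table;
    -- natively compiled procedures can only touch such tables.
    INSERT INTO dbo.ShoppingCart (CartId, UserId, CreatedDate)
    VALUES (@CartId, @UserId, SYSDATETIME());
END;
```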
What Can Memory-Optimized Tables Do for Me?
The most noticeable performance improvements will be in CPU use. Here is what Microsoft says:
In-Memory OLTP can help reduce the hardware investment in scaled-out workloads because one server can potentially deliver the throughput of five to ten servers.
If that proves true, it will be a huge benefit.
Disk IO may be improved due to the way that logging and checkpointing occur for memory-optimized objects. However, the improvements are less certain and somewhat dependent on having a lot of memory in your server, since memory-optimized objects can only work with data in memory.
Memory requirements are not reduced by the use of memory-optimized objects; in fact, you will need considerably more memory, since all the data in memory-optimized tables must fit in main memory. However, the overall effect of memory-optimized tables and procedures will normally be much faster access to your performance-critical tables.
Storage is another area in which memory-optimized tables offer little benefit. Memory-optimized tables do not exist only in memory; they also have an on-disk footprint to ensure recoverability and data consistency. Sometimes, in fact, the on-disk component can be larger than the table in memory.
Increased scalability may be the most important benefit offered by this new technology. Most scalability issues are due to increased contention between more and more processes running on the same server. At the SQL engine level, contention issues have three components: latching contention, spinlock contention, and locking-related contention.
In-Memory OLTP does not use latches at all to retrieve data, which completely eliminates latching and spinlock contention. It also uses a new form of optimistic isolation that eliminates blocking between reading and writing processes. It differs from existing snapshot modes in that it does not store row versions in tempdb.
The downside to this isolation model is that when there is contention between two writing processes (for example, two concurrent transactions trying to update the same row), one is allowed to complete and the other fails. Your application code must deal with the failure, usually by resubmitting the change. This means that legacy apps written for pessimistic isolation scenarios must be changed to trap these failures and take appropriate action.
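A retry loop is the usual way to handle such a conflict. The sketch below is my own illustration, assuming a hypothetical memory-optimized table dbo.ShoppingCart; error 41302 is the documented write/write conflict error, and 41305/41325 are related validation failures:

```sql
DECLARE @CartId INT = 1, @NewUserId INT = 42;  -- hypothetical values
DECLARE @retry  INT = 3;

WHILE @retry > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        -- Explicit transactions need a snapshot-level hint on
        -- memory-optimized tables.
        UPDATE dbo.ShoppingCart WITH (SNAPSHOT)
           SET UserId = @NewUserId
         WHERE CartId = @CartId;
        COMMIT TRANSACTION;
        SET @retry = 0;              -- success: leave the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0
            ROLLBACK TRANSACTION;
        -- 41302 = write/write conflict; 41305/41325 = validation failures
        IF ERROR_NUMBER() IN (41302, 41305, 41325) AND @retry > 1
            SET @retry -= 1;         -- conflict: resubmit the change
        ELSE
            THROW;                   -- other error, or retries exhausted
    END CATCH;
END;
```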
Is It For Me?
Although I am pretty excited about it, I wouldn’t plan to deploy memory-optimized technology in a production environment before at least one service pack has been released to clean up the inevitable bugs. You must also consider carefully the issue of the isolation model: legacy apps may have to undergo a significant rewrite to use this technology, and applications that experience frequent write/write blocking may not be candidates for it at all.
Is This the Future of Database Programming?
Almost from the origin of relational databases there has been a problem sometimes called the impedance mismatch: database applications are written in modern object-oriented, compiled languages, but the database code is written in the decades-old SQL language, which is interpreted, not compiled. There has always been an uneasy relationship between the two diametrically opposed programming models.
Database developers have longed for the ability to write their database code in the same language that they write their application code. Many object/relational tools have evolved to attempt to solve this problem, but all of the tools developed so far carry a performance overhead and have not been able to match the performance of SQL itself. Perhaps memory-optimized technology is a step toward solving this mismatch, if not in this release then perhaps in a later one.
This has been a quick look at a new and very promising technology. However, SQL 2014 has not yet been released, so I am writing without having been able to test it properly and am dependent on the accuracy of Microsoft’s documentation.