
An in-memory database (also called a main-memory database system, MMDB, or memory-resident database) is a database management system that primarily relies on main memory for data storage, in contrast to systems that employ a disk storage mechanism. Main-memory databases are faster than disk-optimized databases because the internal optimization algorithms are simpler and execute fewer CPU instructions. Accessing data in memory also eliminates seek time when querying, which provides faster and more predictable performance than disk.
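As a minimal sketch of the idea, SQLite can be told to keep an entire database in main memory by connecting to the special `:memory:` path. Every read and write below touches RAM only; no disk seeks are involved (the table and data are illustrative, not from any particular product):

```python
import sqlite3

# Connect to an in-memory SQLite database: nothing is written to disk,
# and the database vanishes when the connection is closed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
conn.executemany("INSERT INTO orders (item) VALUES (?)",
                 [("book",), ("lamp",), ("desk",)])

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 3
conn.close()
```

The same SQL works against a file-backed connection; only the storage medium changes, which is the core of the in-memory idea.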

There are three facts worth noting about in-memory databases:

·         Databases that take advantage of in-memory processing really do deliver the fastest data-retrieval speeds available today, which is enticing to companies struggling with high-scale online transactions or timely forecasting and planning.

·         Though disk-based storage is still the enterprise standard, the price of RAM has been declining steadily, so memory-intensive architectures will eventually replace slow, mechanical spinning disks.

·         A vendor war is breaking out to convert companies to these in-memory systems. That war pits SAP -- application giant but database newbie -- against database incumbents IBM, Microsoft, and Oracle, which have big in-memory plans of their own.

The past few years were dominated by the major database vendors introducing and improving their database cluster products. There is the breed of shared-nothing clusters, like Microsoft SQL Server 2008, and there are the shared-everything clusters, like Oracle and Sybase. It is amazing how far these technologies have come and how much we have become used to "always available" databases. You know what's coming next: now that we have uninterrupted access to data, it would be great if we could get the data faster. Well, the database vendors have an answer for that as well.
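The shared-nothing idea mentioned above can be sketched in a few lines: each node owns a disjoint partition of the data, and a key is hashed to exactly one node. The node count and key names here are illustrative assumptions, not any vendor's actual partitioning scheme:

```python
import zlib

NUM_NODES = 3
# Each dict stands in for one node's private, local store -- no node
# shares memory or disk with another (hence "shared nothing").
nodes = [dict() for _ in range(NUM_NODES)]

def node_for(key):
    # Deterministic hash routing: every key maps to exactly one node.
    return zlib.crc32(key.encode()) % NUM_NODES

def put(key, value):
    nodes[node_for(key)][key] = value

def get(key):
    return nodes[node_for(key)].get(key)

put("cart:42", ["book"])
print(get("cart:42"))  # ['book']
```

A shared-everything cluster, by contrast, would let every node read and write one common store, trading partition routing for coordination overhead.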

In-memory databases bypass the requirement to write data to disk, and that is what improves their speed. Designed for high-volume transaction systems, such as e-commerce shopping carts, in-memory databases are unbeatable when it comes to writing transaction data. This is fundamentally different from the data caching of traditional database engines: data caching improves read performance but does nothing to improve write performance.
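The read-versus-write asymmetry can be made concrete with a toy model (an illustrative sketch, not any vendor's engine): a cache sits in front of a "slow" disk-backed store. Reads that hit the cache skip the slow layer entirely, but every write must still reach the backing store, so write latency is unchanged:

```python
class SlowStore:
    """Stand-in for a disk-backed store; counts how often 'disk' is touched."""
    def __init__(self):
        self.data = {}
        self.disk_reads = 0
        self.disk_writes = 0
    def read(self, key):
        self.disk_reads += 1
        return self.data.get(key)
    def write(self, key, value):
        self.disk_writes += 1
        self.data[key] = value

class CachedStore:
    """Read cache over a backing store: helps reads, not writes."""
    def __init__(self, backing):
        self.backing = backing
        self.cache = {}
    def read(self, key):
        if key not in self.cache:          # miss: go to 'disk' once
            self.cache[key] = self.backing.read(key)
        return self.cache[key]
    def write(self, key, value):
        self.backing.write(key, value)     # every write still hits 'disk'
        self.cache[key] = value

store = CachedStore(SlowStore())
for i in range(100):
    store.write("sku", i)                  # 100 writes -> 100 disk writes
for _ in range(100):
    store.read("sku")                      # all reads served from cache
print(store.backing.disk_reads, store.backing.disk_writes)  # 0 100
```

An in-memory database removes the slow layer from the write path altogether, which is why it helps write-heavy workloads where a cache cannot.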

There is a downside to these databases as well: they offer a shortcut around performance problems in poorly written applications. Like powerful hardware, an in-memory database has the potential to mask poor application development, and we might see an explosion of in-memory database implementations for exactly that reason.

Microsoft is still in the planning and rumor phase of providing an in-memory database for the next version of SQL Server. The code name for the next SQL Server upgrade is Kilimanjaro, which is the name to use when searching for upgrade information. It is not clear when the new release will be available, nor whether it will be named SQL Server 2010; that depends on whether it ships this year.

At first, database management systems were very crude, as memory was always scarce on the earliest electronic computers. In fact, Bill Gates was famously (though perhaps apocryphally) quoted as saying in 1981 that 640K of memory ought to be enough for anybody.

These early database management systems were also very specific to the computer and to the user. IBM was one of the leaders in this category, but soon many clones and competitors entered the marketplace, all at varying price points and with different, alternative solutions.

With the advent of the 90s, the focus shifted from having an accurate database management system to having one that was easily maintainable. This was because memory capacity started to grow, along with the creation and spread of information. This is when some of the more sophisticated database management systems arrived on the market.