How would one design a database today?
Then: memory scarce; open, read, write.
Today: 64-bit, mmap, sparse files.

Localmemcache is a library for C and Ruby that aims to provide an interface similar to memcached but for accessing local data instead of remote data. It's based on mmap()'ed shared memory for maximum speed. Since version 0.3.0 it supports persistence, also making it a fast alternative to GDBM, Berkeley DB, and Tokyo Cabinet.

Version 0.4.4: Bugfixes for OS X and Autorepair

Version 0.4.4 brings fixes for core dumps on OS X and for bugs in the autorepair feature (which is now also better covered by tests).
New methods: shm_status, has_key?
(Thanks to Max Schöfmann and Florian Dütsch for feedback/bug reports.)
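
The new methods can be tried out like this (a minimal sketch; has_key? is assumed to return true or false, and since the exact contents of shm_status are version-dependent, it is simply printed here):

require 'localmemcache'

$lm = LocalMemCache.new :filename => "./viewcounters.lmc"
$lm[:foo] = "1"

$lm.has_key?(:foo)    # => true
$lm.has_key?(:bar)    # => false

# shm_status reports statistics about the underlying shared memory segment
p $lm.shm_status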

Previous Releases

Install

The Ruby binding is available as a Ruby Gem. It can be installed by executing

gem install localmemcache
If you just want to use the C API, download the .tar.gz from here.

Requirements

  • a 64-bit (or wider) Unix (32-bit is possible, but you will run out of virtual address space quickly)
  • a file system that offers sparse files (see the sketch after this list for a quick way to check)
  • Note for OS X: OS X does not qualify, since HFS+ does not support sparse files, and sem_timedwait() and sem_getvalue() are not available either.
    Note for FreeBSD: It has been reported that localmemcache sometimes hangs there; it is not yet clear what the problem is.
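
To quickly check whether your file system creates sparse files, a small Ruby sketch (not part of localmemcache) can write a single byte at a large offset and compare the logical file size with the space actually allocated:

require 'tempfile'

f = Tempfile.new('sparse_check')
f.seek(10 * 1024 * 1024)      # seek 10 MB into the empty file
f.write('x')                  # write a single byte at that offset
f.flush
st = File.stat(f.path)
allocated = st.blocks * 512   # File::Stat#blocks counts 512-byte blocks (may be nil on some platforms)
puts "logical size: #{st.size} bytes, allocated: #{allocated} bytes"
puts(allocated < st.size ? "sparse files appear to be supported" : "no sparse file support")
f.close!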

Using


require 'localmemcache'
# 1. the memcached way
# $lm = LocalMemCache.new :namespace => :viewcounters
# 2. the GDBM way
# $lm = LocalMemCache.new :filename => "./viewcounters.lmc"
# 3. Using LocalMemCache::SharedObjectStorage
$lm = LocalMemCache::SharedObjectStorage.new :filename => "./viewcounters.lmc"
$lm[:foo] = 1      # store a value
$lm[:foo]          # read it back
$lm.delete(:foo)   # remove it
    
    
(C version of this example: hello.c)
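
Since the example above uses SharedObjectStorage, values are not limited to plain strings: the point of this wrapper is to store whole Ruby objects (presumably serialized under the hood). A small sketch under that assumption; the file name is arbitrary:

require 'localmemcache'

store = LocalMemCache::SharedObjectStorage.new :filename => "./objects.lmc"
store[:config] = { :retries => 3, :hosts => ["a", "b"] }   # store a non-string value
p store[:config]                                           # read the object back
store.delete(:config)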

Performance

Here's a quick speed comparison, made on an Intel(R) Xeon(R) CPU E5205 @ 1.86GHz:

Ruby benchmark pseudo code:
2_000_000.times {
  index = rand(10000).to_s
  $hash.set(index, index)
  $hash.get(index)
}

MemCache:              253,326.122 ms
GDBM:                   24,226.116 ms
Tokyo Cabinet:           9,092.707 ms
Localmemcache 0.4.0:     5,310.055 ms
Ruby Hash of Strings:    4,963.313 ms

(Code of the benchmarks used)

So, on my machine, using localmemcache 0.4.0 to store key-value data on disk is about 10% slower than keeping it in memory in a Ruby hash of strings, and about 40% faster than Tokyo Cabinet (which offers features similar to localmemcache).
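
As an illustration, a self-contained version of the pseudo code for the localmemcache case might look like the following (a sketch only, not the benchmark code linked above; the file name ./bench.lmc is arbitrary):

require 'benchmark'
require 'localmemcache'

$lm = LocalMemCache.new :filename => "./bench.lmc"

seconds = Benchmark.realtime do
  2_000_000.times do
    index = rand(10000).to_s
    $lm[index] = index    # set
    $lm[index]            # get
  end
end
puts "%.3f ms" % (seconds * 1000)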

Who uses Localmemcache?

Personifi use localmemcache to serve billions of hits each month. Armin Roehrl: "we use localmemcache because it solves one problem very well and we love it!"

Slides for my talk at the Ruby on Rails Group Munich

Now available on github: < pdf | key > (German)

Source code

The source code is hosted on github. It can be retrieved by executing

git clone git://github.com/sck/localmemcache.git
    

Caveats

  • Localmemcache's .lmc files are not binary compatible across different CPU architectures; they are essentially memory-mapped C structs.
  • Because of the convenient auto-repair feature that kicks in after a lock timeout, localmemcache is allergic to SIGSTOP (that is, if you manage to SIGSTOP a process while it currently holds a localmemcache lock).
  • Tips for backups

    Note that you cannot copy localmemcache's .lmc files while other processes are making changes to the data; doing so will likely result in a corrupt backup. So you need to make sure that none of your processes are writing during the time you take a backup. As for copying sparse files: cp recognizes them automatically; with tar you need to use the -S option.

Read on / RDoc

License

Copyright (c) 2009 Sven C. Koehler (schween at s n a f u dot de)
Localmemcache is freely distributable under the terms of an MIT-style license.