Recently, I have been looking for ways to optimize my Stream Closed Captioner application. During one of my typical scroll-throughs of my Following lists and other sites, I discovered Nebulex, a local and distributed caching toolkit for Elixir.
Like many other caching toolkits, Nebulex covers all the basics.
This article is not a deep dive into Nebulex features but an example of how I used it in my own application.
Nebulex Basics
Inserting (with time to live):
Blog.Cache.put(key, data, ttl: :timer.hours(1))
Retrieving:
Blog.Cache.get(key)
Counters:
Blog.Cache.incr(:my_counter)
Blog.Cache.incr(:my_counter, 5)
Deleting:
Blog.Cache.delete(key)
Now, this is only a partial list of Nebulex's capabilities. There are several more functions for interacting with the cache, covering many more use cases you might have.
But Why Choose Nebulex?
While looking at Nebulex, two things caught my eye that would make it fit easily into any Phoenix application (in my opinion):
The easy setup using a mix task
Nebulex's implementation of Cache as System of Record
Easy Setup
Getting started with Nebulex is extremely easy thanks to its handy mix task. Just run:
mix nbx.gen.cache -c Blog.Cache
This command will generate the cache configuration in your config.exs and create a cache module. The last step is to add the cache module to your supervision tree; this step is called out in both the docs and the task output.
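For reference, here is a sketch of what the generated cache module and the supervision-tree entry look like. The Blog module and :blog app names come from the examples in this post; in your project, they will match whatever name you passed to the task.

```elixir
# lib/blog/cache.ex -- generated by the mix task
defmodule Blog.Cache do
  use Nebulex.Cache,
    otp_app: :blog,
    adapter: Nebulex.Adapters.Local
end

# lib/blog/application.ex -- add the cache to your supervision tree
def start(_type, _args) do
  children = [
    # Start the cache before anything that reads from it
    Blog.Cache
    # ...the rest of your supervisors and workers
  ]

  Supervisor.start_link(children, strategy: :one_for_one, name: Blog.Supervisor)
end
```

The Local adapter keeps everything in-memory on a single node; Nebulex also ships adapters for distributed setups if you need them.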
Cache as System of Record (Cache-as-SoR)
This feature caught my eye and is perfect for how I want to use the cache.
If you're unfamiliar with Cache as System of Record, here is an excerpt from the Nebulex docs:
The cache-as-SoR pattern implies using the cache as though it were the primary system-of-record (SoR).
The pattern delegates SoR reading and writing activities to the cache, so that application code is (at least directly) absolved of this responsibility. To implement the cache-as-SoR pattern, use a combination of the following read and write patterns:
Read-through
Write-through
Advantages of using the cache-as-SoR pattern are:
Less cluttered application code (improved maintainability through centralized SoR read/write operations)
Choice of write-through or write-behind strategies on a per-cache basis
Allows the cache to solve the thundering-herd problem
To better understand what this means, here is a code example of implementing a read-through pattern in my project:
@decorate cacheable(
  cache: Cache,
  key: {StreamSettings, user_id}
)
def get_stream_settings_by_user_id(user_id) do
  Repo.get_by(StreamSettings, user_id: user_id)
end
In my Settings module, get_stream_settings_by_user_id is called every time a caption payload is broadcast to viewers, to check whether the broadcaster wants additional profanity filters or other text modifiers applied. Nebulex offers the cacheable decorator, which caches the result of my get_stream_settings_by_user_id function under a specified key. If the data already exists in the cache and hasn't expired, the cached result is returned the next time this function is called.
In the above code, I am caching StreamSettings by user_id, saving time by avoiding trips to the database until the cache entry expires. I love how easy it is to set up caching on any function I use.
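One detail worth noting: the @decorate syntax only works in modules that pull in Nebulex's caching macros via use Nebulex.Caching. Here is a sketch of how a Settings module might be wired up; the module and alias names are my assumptions for illustration, not the exact names from my app.

```elixir
defmodule MyApp.Settings do
  # Enables the @decorate syntax for cacheable, cache_put, and cache_evict
  use Nebulex.Caching

  alias MyApp.Cache
  alias MyApp.Repo
  alias MyApp.Settings.StreamSettings

  @decorate cacheable(
    cache: Cache,
    key: {StreamSettings, user_id}
  )
  def get_stream_settings_by_user_id(user_id) do
    Repo.get_by(StreamSettings, user_id: user_id)
  end
end
```

Without the use Nebulex.Caching line, the decorator won't compile, so it's the first thing to check if the cache appears to be doing nothing.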
But what happens if the user's StreamSettings are updated? That's where the write-through pattern's decorator comes in! Let's look at some code again:
@decorate cache_put(
  cache: Cache,
  key: {StreamSettings, stream_settings.user_id}
)
def update_stream_settings(%StreamSettings{} = stream_settings, attrs) do
  stream_settings
  |> StreamSettings.update_changeset(attrs)
  |> Repo.update()
end
cache_put is the magic function here. It updates the cache with the result of update_stream_settings under the specified key, {StreamSettings, stream_settings.user_id}. I like how Nebulex makes it simple to wrap your existing functions with cache reads and writes.
But the value you are caching in your read-through is different from the value you are caching in your write-through function!?!?
Good catch! Well, Nebulex has a function for that too. If the value you are caching with the cacheable function has a different shape from the value returned by the cache_put function, you can use the cache_evict function instead. Let's go through an example:
# Cache - key:{StreamSettings, 123}, value: nil
get_stream_settings_by_user_id(123)
# Return: %StreamSettings{some_data}
# Cache - key:{StreamSettings, 123}, value: %StreamSettings{some_data}
Calling get_stream_settings_by_user_id takes the returned value and places it in the cache, going from nil to %StreamSettings{some_data}.
update_stream_settings(stream_settings, %{update_data})
# Return: {:ok, %StreamSettings{some_updated_data}}
# Cache - key: {StreamSettings, 123}, value: {:ok, %StreamSettings{some_updated_data}}
But a problem arises with the return value of update_stream_settings. Instead of a StreamSettings struct, it returns a tuple, {:ok, %StreamSettings{}}. But cache_evict is here to save the day!
@decorate cache_evict( # <------- The change -------
  cache: Cache,
  key: {StreamSettings, stream_settings.user_id}
)
def update_stream_settings(%StreamSettings{} = stream_settings, attrs) do
  stream_settings
  |> StreamSettings.update_changeset(attrs)
  |> Repo.update()
end
# Cache - key:{StreamSettings, 123}, value: nil
Instead of caching the new result from update_stream_settings, this removes the value from the cache, eliminating any worry of mismatched cached data and letting the read-through function, aka cacheable, write to the cache on the following invocation.
Conclusion
Nebulex's Cache as System of Record is a feature that fits right into my mental model of how I want to work with a cache: just decorate the functions I use throughout my application and BAM, caching with minimal configuration. Nebulex has plenty more features, so take a look at it if you're in the market for a caching toolkit.
Happy Coding!