
[DB] Criteria for setting up a cache



Hello! This is Jayeon.

Today, I will explain my criteria for setting up a cache. This post is based on my personal experience in the field, so please take it as one reference point, haha.


What is Cache?

A cache pre-stores the results of requests that are expected to come in again, so the service can respond quickly. In other words, instead of hitting the DB or an API every time a request arrives, the application stores the result in advance and serves later requests from the cache. The idea behind caching is the Pareto principle.

The Pareto principle states that 80% of outcomes come from 20% of causes.



In other words, you do not need to cache every result; caching only the 20% of data that is used most frequently can improve overall efficiency.
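To make the idea concrete, here is a minimal read-through sketch in Java. The class name and the loader are hypothetical, just to illustrate the flow of checking the cache before the DB:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal read-through caching: serve from the cache on a hit,
// fall back to the DB on a miss, then store the result for next time.
public class ReadThroughExample {
    private final Map<Long, String> cache = new ConcurrentHashMap<>();

    public String findById(long id) {
        // computeIfAbsent queries the DB only on a cache miss.
        return cache.computeIfAbsent(id, this::loadFromDb);
    }

    private String loadFromDb(long id) {
        return "row-" + id; // placeholder for the actual DB query
    }
}
```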


What data should be cached?

According to the Pareto principle, you should not cache all data, only the data that genuinely needs it. So, what kind of data should be cached?


Data that needs to be read frequently but rarely written

In theory, it is often said that "data that is read frequently but rarely written should be cached," but the criteria for "frequent reads" and "rare writes" are quite vague.


So, I identify candidate data for caching in the following steps.


  • Check the top 5 most-called RDB queries through an APM such as Datadog.
  • For each of those queries, identify which table it reads from.
  • Check how often UPDATE queries against that table are called.


Through this process, we check whether there are many read queries but few update queries. The table I checked in the field had 1.74 million query calls per day, while update queries numbered at most 500. This is clearly a good fit for caching, haha.


Data sensitive to updates

Data sensitive to updates is data for which any inconsistency between the RDB and the cache must be kept short. For example, payment information is very sensitive to updates, so even if it meets the caching conditions above, we need to think carefully before applying caching.


The payment-related tables I needed to cache met both characteristics above. So, instead of applying caching to all logic that uses those tables, I decided to cache partially, only in relatively safe logic where no actual payment occurs.


Local Caching vs Global Caching

Now we have roughly determined which data to cache and the scope of caching. Next, we need to decide "where" to store the cached data. Generally, you can store it in the application's local memory or on a separate server such as Redis.


Local Caching

Local caching is a method of storing cached data in the memory of the application server itself. Generally, Guava Cache or Caffeine is widely used; a minimal sketch follows the pros and cons below.


Advantages

  • It is fast, because the application reads the cache directly from memory on the same server while executing its logic.
  • It is easy to implement.


Disadvantages

  • Several problems arise when there are multiple instances.
    • A cache change made in one instance is not propagated to the other instances. (Some local caching libraries can propagate changes, however.)
    • Since each instance holds its own cache, a newly launched instance starts empty and must warm up its cache. This can cause a burst of cache misses, and the resulting traffic may be more than the instance can handle, causing it to die.
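As a sketch of local caching, here is what a Caffeine-based cache could look like. The key/value types and the loader are assumptions for illustration:

```java
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;

// Local cache living in the application server's own heap memory.
public class LocalCacheExample {
    private final LoadingCache<Long, String> cache = Caffeine.newBuilder()
            .build(this::loadFromDb); // loader is called automatically on a miss

    public String get(long id) {
        return cache.get(id); // in-memory lookup, no network hop
    }

    private String loadFromDb(long id) {
        return "row-" + id; // placeholder for the actual RDB query
    }
}
```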


Global Caching

Global caching is a method of storing cached data on a separate server such as Redis; a minimal sketch follows the pros and cons below.


Advantages

  • Since the cache is shared among instances, all instances see the same cache value even after one instance modifies it.
  • A newly launched instance can read from the existing cache store, so there is no need to warm up a per-instance cache.


Disadvantages

  • It is slower than local caching because every cache access goes over the network.
  • Separate cache servers need to be used, which incurs infrastructure management costs.
    • Infrastructure management cost? → Server fees, time spent setting up and maintaining infrastructure, and planning for disaster response, etc.
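As a sketch of global caching, here is the same lookup done against Redis with the Jedis client. The key naming and the 60-second TTL are assumptions for illustration:

```java
import redis.clients.jedis.JedisPooled;

// Global cache on a separate Redis server, shared by all instances.
public class GlobalCacheExample {
    private final JedisPooled jedis = new JedisPooled("localhost", 6379);

    public String get(long id) {
        String key = "payment:policy:" + id;
        String cached = jedis.get(key);  // network round trip to Redis
        if (cached != null) {
            return cached;               // hit: same value on every instance
        }
        String value = loadFromDb(id);   // miss: fall back to the RDB
        jedis.setex(key, 60, value);     // write back with a 60-second TTL
        return value;
    }

    private String loadFromDb(long id) {
        return "row-" + id; // placeholder for the actual RDB query
    }
}
```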


What did I choose?

The company's application server currently runs as multiple instances, but I still chose local caching.

There are three main reasons.


  • The data to be cached amounts to about 40,000 rows in the RDB, which is less than 4 MB even if all of it is loaded into memory.
  • Query performance for the payment-related data needed to be good.
  • Redis is already in place, but storing a new cache in Redis still incurs infrastructure costs.


How to update the cache?

If there are multiple application servers and each applies local caching, the cached values stored on each server can diverge. For example, the cached value on server A is “1”, but after a change is made through server B, server B's cached value becomes “2”. In this situation, a user sending requests through the load balancer will receive different values depending on whether the request lands on server A or server B.


Therefore, each instance needs to be configured to drop its cache automatically and re-read from the RDB, and TTL is the mechanism mainly used for this.


How long should the TTL be set?

TTL is an acronym for Time To Live, and it is a setting that deletes a cache entry after a certain time has passed. For example, with a TTL of 5 seconds, a cached entry is automatically deleted 5 seconds after it is written. The next access is then a cache miss, so the data is re-read from the RDB and stored again.
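Here is a small demo of that behavior, using Caffeine's expireAfterWrite as the TTL (the timings are illustrative):

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.time.Duration;

public class TtlDemo {
    public static void main(String[] args) throws InterruptedException {
        Cache<String, String> cache = Caffeine.newBuilder()
                .expireAfterWrite(Duration.ofSeconds(5)) // TTL of 5 seconds
                .build();

        cache.put("key", "value");
        System.out.println(cache.getIfPresent("key")); // "value" -> hit

        Thread.sleep(6_000); // wait past the TTL
        System.out.println(cache.getIfPresent("key")); // null -> miss, re-read from the RDB
    }
}
```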

So, how long should the TTL be set?


Read/write occurs on a single cache server

If reads and writes all go through a single global caching server such as Redis, or through a single application server with local caching applied, the TTL can be set to hours or longer. After all, the cache entry is updated in place on every write, so a server reading from that cache always sees the up-to-date data.


In this case, instead of setting a TTL, the cache server can be configured to evict entries gradually with an LRU policy when it runs out of memory.
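With Redis, this corresponds to setting maxmemory together with maxmemory-policy allkeys-lru. For a local cache, the equivalent is a size bound instead of a TTL; a minimal Caffeine sketch is below (note that Caffeine's actual eviction policy is W-TinyLFU, an LRU-like mix of recency and frequency):

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class SizeBoundedCache {
    // No TTL: entries are evicted only when the cache grows past
    // maximumSize, using an LRU-like (W-TinyLFU) policy.
    static final Cache<Long, String> CACHE = Caffeine.newBuilder()
            .maximumSize(10_000)
            .build();
}
```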


Read/write occurs on multiple cache servers

If reads and writes are spread across multiple global caching servers, or across multiple application servers with local caching applied, it is better to use a TTL on the order of seconds to minutes. This is because a server whose cache has not yet reflected a change may still serve stale data.


The TTL value is determined by context. The more important updates are and the higher the probability that values change, the shorter the TTL should be; the less important updates are and the lower the probability of change, the longer the TTL can be.


How did I set the TTL?

The data I am caching is payment-related. Even though caching is not applied to the strict logic where actual payments occur, updates still matter given the nature of payments. Since the probability of updates is low, however, I set the TTL to about 5 seconds to be safe.


Conclusion

To summarize, the caching method I chose is as follows.


  • The data is payment-related.
  • It is queried very frequently but rarely modified.
  • Caching is applied only to query logic, not where payments actually occur.
  • Local caching is applied, with a TTL of 5 seconds.


The next step is to conduct performance testing for the caching method that was actually applied. I am still thinking about how to conduct the performance test in detail, so I will write about it in a later post!
