I’d like to talk about some advanced cache updates. Say we have a problem where multiple people submit posts to ASCII Chan, or submit to your blog, at the exact same time. They hit different app servers, both write to the database at the same time, and then both update the cache at the same time, overwriting each other. This can happen. It’s a type of race condition: two updates come in at the same time, we don’t know what order to handle them in, and one tramples the other.

If we draw this in a picture, it looks something like this. Say we’ve got two app servers, our database, and our cache, and say we’ve got elements in our database, which we’ll call one and two. Both app servers get requests from users to submit a new entry into our database, say a new piece of art for ASCII Chan, and this happens at the same time. One server may submit element three, and the other may submit element four. Our database stays in sync, because databases enforce these constraints: you can insert as many things as you want at the same time and the database will order it all for you.

But here’s the problem. Let’s start with version one of this problem, in which each app server manipulates the cache directly. The first server inserts element 3, then writes to the cache and says, “the database looks like this: 1, 2, 3.” Just as it does this, the second server finishes inserting element 4 into the database, and, since it is not communicating with the first server, it overwrites the cache to instead look like this: 1, 2, 4. Remember, we’re not doing another query from the database; we’re just manipulating the cache directly. This app server squashed the other app server’s update. That’s a problem.
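This first version of the race can be sketched in a few lines of Python. This is an illustrative simulation, not code from the course: the `database` and `cache` objects and the function name are stand-ins, and the two “app servers” are just two calls whose interleaving is scripted to show the lost update.

```python
# Hypothetical in-memory stand-ins for the database and the cache.
database = [1, 2]
cache = {"front": list(database)}

def insert_and_update_cache_directly(app_server_view, item):
    """Version 1 of the problem: each app server appends to its own stale
    snapshot of the database and writes that snapshot straight into the cache,
    without re-querying the database."""
    database.append(item)            # the database serializes inserts correctly
    app_server_view.append(item)     # but each server only knows about its own insert
    cache["front"] = list(app_server_view)

# Both servers start from the same snapshot of the database: [1, 2].
server_a_view = [1, 2]
server_b_view = [1, 2]

insert_and_update_cache_directly(server_a_view, 3)  # cache is now [1, 2, 3]
insert_and_update_cache_directly(server_b_view, 4)  # cache becomes [1, 2, 4]

# The database has everything, but server B trampled server A's cache update.
print(database)        # [1, 2, 3, 4]
print(cache["front"])  # [1, 2, 4] -- element 3 is missing from the cache
```

The database ends up with all four elements, but the cache has lost element 3, because the second cache write replaced the whole list rather than merging in one new element.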
Now, let’s look at another way this problem can happen. Say we were using the first approach we talked about, where when we write to the database we immediately do a read from the database and update our cache that way. We can still have this problem. First, one app server inserts element 3 into the database, and then it says, “okay, let’s rerun the query so we can update the cache.” But at the same time, the other app server inserts element 4 into the database, and it also wants to rerun the query. So each of these has its own view of the database: the first app server thinks it’s 1, 2, 3, and the other thinks it’s 1, 2, 3, 4.

Now, there’s no guarantee the first app server will write to the cache before the second one, because things can happen out of order. Your app server can have a slight delay, there can be a network glitch; there are any number of reasons why the first server might have a little hiccup, so the second server gets its write to the cache in first: 1, 2, 3, 4. Then the first server comes in and tramples on top of it with its stale 1, 2, 3.

So there are a handful of ways these app servers can overwrite each other, and if we’re redirecting the user to our front page to do the cache update that way, the odds of this happening are even higher, because the update doesn’t happen quite so fast: we’ve got to go all the way to the user and back before we update our cache. That’s the problem: multiple app servers overwriting each other in the cache, because the cache doesn’t have transactions or any of the fancy stuff the database has. In the database we just say, “insert this element,” but in the cache we say, “the list of elements is this,” so we can’t just add a single element to it. Let me introduce one solution to this problem.