Caching seems to make no difference
What puzzles me, though, is the effect of caching, or rather the lack of any effect whatsoever. I was expecting a significant reduction in page-load time when returning to a previously visited page. Instead, it takes just as long as a page I have never visited.
I would really appreciate a brief explanation of how the caching works and what it does. I imagined it being something similar to eAccelerator.
I turn caching on and off and notice no difference at all. Am I missing something here?
I was imagining that the caching function would store pre-rendered pages as HTML in order to reduce page-load time and load on the server. Isn't that so?
I would really appreciate just a very brief explanation or perhaps a link to some ready source of information.
I have it turned off now, but when it goes public I will turn it on and see what happens.
I hope this gives you some insight on it. If I am wrong, please feel free to correct me :)
So you are saying that pages have to be re-compiled each time they are requested, even as the same user is browsing back and forth on a site? It can't be...
And how about the browser's cache - is it inactive for CMS sites? Why do I measure the same time when returning to an already visited page?
Browser cache is user dependent, but I think you can influence it through meta tags.
You can also go much further and think about all the caching that might happen on intermediate servers your site's content is sent through on its way to the visitor, but I am guessing they are not doing that for your site ;)
On a fast server, which I suppose this must be, it doesn't really matter, but on a shared one it would make a tremendous difference if pages were cached in their compiled state during a user session, wouldn't it?
Can't that happen?
When I used to have html only sites, the server only had to feed the data to users.
So pretty much, you can host the sites on any cheap server.
However, when I started using a CMS five years ago, I realized that I could no longer just use cheap hosting services.
This is because concrete5 (and other CMSs) are actual programs: they process data and read from and write to a MySQL database.
I now have to think about server speed relative to the number of users.
I can no longer use $1/mo hosting... unfortunately.
I need to find a server that is affordable yet fast enough.
You choose either update speed (editing the site through the CMS) or price (updating the site with desktop software and uploading the files via FTP).
Anyway, I don't know what kind of hosting you are using, but I just wanted to share my general approach to choosing hosting for a CMS.
I use a shared hosting platform and have had few speed issues. Granted, it is certainly slower than a C5-optimized shared server and way slower than a dedicated server.
So unless your site is huge, most shared hosting services will do.
There are a couple different kinds of caches at play here. The first one is your browser's cache. You're right, we should be using that to its fullest. Then, there's concrete5's internal object cache, which is what that setting in the dashboard controls. Here's how that system works:
Since there's a lot going on in concrete5, loading any given page means a lot of data is requested from the database: information for the entire page, information for any areas on the page, detailed information for all blocks in those areas, detailed information about who is permitted to see those blocks, and so on. Individual blocks also make database calls to retrieve their data. In our experience, the slowest part of any dynamic web application is its link to the database, especially when serving lots of requests. So the object cache takes a large number of the things that would otherwise have to be retrieved from the database and saves them to the file system instead. Then, when one of those objects is requested, the file system is queried first, before going to the source database.
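To make that concrete, the pattern described above is a classic cache-aside lookup with the file system as the cache tier. Here's a minimal sketch (the `CACHE_DIR` name and `load_object` helper are hypothetical, not concrete5's actual API, and `fetch_from_db` stands in for the real database query):

```python
import json
import os

CACHE_DIR = "object_cache"  # hypothetical directory for cached objects


def load_object(key, fetch_from_db):
    """Cache-aside lookup: check the file system first, then fall back to
    the database. `fetch_from_db` stands in for the real (slow) query."""
    path = os.path.join(CACHE_DIR, key + ".json")
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)  # cache hit: no database round trip
    obj = fetch_from_db()        # cache miss: run the query
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(path, "w") as f:
        json.dump(obj, f)        # store for subsequent requests
    return obj
```

The second request for the same key never touches the database, which is exactly where the speedup comes from on database-bound shared hosts.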
It's not perfect: as we continue to develop, we haven't been great about adding new features to the cache, so things like pages and blocks are cacheable, but newer features, like files, don't always use the cache (meaning there is still a hefty number of queries that concrete5 runs). Additionally, on sites that aren't very busy, or sites that are really constrained in other ways, caching probably won't be that noticeable. In our benchmarks, it definitely is.
I could see extending the cache further: when people talk about full content or page caching, what they mean is, instead of just caching programming objects to the file system, actually caching the full HTML generated by either a block or a page. This would result in an even greater speedup for pages and blocks that aren't dynamic at all. This is on our radar.
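A minimal sketch of that full-page idea, with invalidation for when the page is edited (the `PAGE_CACHE` directory and the `serve_page`/`invalidate` helpers are hypothetical illustrations, not concrete5 code; `render` stands in for the full block/template rendering pass):

```python
import hashlib
import os

PAGE_CACHE = "page_cache"  # hypothetical directory for rendered HTML


def serve_page(path, render):
    """Return stored HTML for `path` if available; otherwise render and store."""
    name = hashlib.sha1(path.encode()).hexdigest() + ".html"
    cached = os.path.join(PAGE_CACHE, name)
    if os.path.exists(cached):
        with open(cached) as f:
            return f.read()  # rendering skipped entirely
    html = render()
    os.makedirs(PAGE_CACHE, exist_ok=True)
    with open(cached, "w") as f:
        f.write(html)
    return html


def invalidate(path):
    """Remove the stored copy after an edit so the next visit re-renders."""
    name = hashlib.sha1(path.encode()).hexdigest() + ".html"
    cached = os.path.join(PAGE_CACHE, name)
    if os.path.exists(cached):
        os.remove(cached)
```

Because the cached artifact is the final HTML, every layer of work (database queries, permission checks, template execution) is skipped on a hit, which is why full-page caching beats object caching for static pages.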
Finally, I have seen one instance where, on a shared server, caching made the site slower. This was on MediaTemple's Grid Server. For some reason, the site had such poor file I/O that reading that many files from the file system was slower than reading from the database. Granted, it wasn't much slower, and the whole site itself was really, really slow, but it was noticeable. concrete5 is not the lightest CMS there is, but most shared hosting solutions run it with no problem, including GoDaddy. Even MediaTemple is better at it these days.
Hope this answers your question – please keep them coming.
As I understand it, there is potential for increased performance through the use of clever caching algorithms.
I can fully understand that caching becomes trickier with dynamic content, where entire pages can seldom be reused.
But I think the majority of websites are static in nature, meaning that the pages look the same each time you browse back to them. Wouldn't it be a great idea to cache the entire pages then? Just a thought.
But there may be a problem for non-technical users on the client side. I don't know whether they would all understand the concept of static content and freezing (for example, a page with a guestbook shouldn't be frozen).
And there would also be issues with navigation: if it changed, frozen pages would need to be recompiled. If someone edits a block in the global scrapbook, all frozen pages containing that block would also need to be recompiled.
But I think it's a good concept. It would deliver much better performance on some servers, though it might not be easy to develop.
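The recompilation bookkeeping mentioned above (knowing which frozen pages embed an edited global block) could be handled with a simple dependency index built at render time. A sketch, with hypothetical names throughout:

```python
from collections import defaultdict

# Hypothetical dependency index: block id -> set of page paths that embed it.
block_to_pages = defaultdict(set)


def register_block(page_path, block_id):
    """Record, at render time, that a frozen page embeds a given block."""
    block_to_pages[block_id].add(page_path)


def pages_to_recompile(block_id):
    """After an edit to a global block, list every frozen page to re-render."""
    return sorted(block_to_pages.get(block_id, set()))
```

Only the affected pages get recompiled, rather than flushing the whole page cache on every edit.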
I'm not clear on how the HTTP Expires header is set, but it would seem that this could be used to good advantage to address this issue, at least as an option.
Visitors access the site in visitor or admin mode.
It would seem that in visitor mode, if the option were chosen, an Expires header could be sent with a date half an hour in the future.
In admin mode, the page could expire immediately and thus be refreshed if changes were made.
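The two-mode idea above boils down to computing one header value. A minimal sketch (the `expires_header` helper and its parameters are hypothetical, not part of concrete5; HTTP dates must be in the RFC 5322 GMT format, which `email.utils.format_datetime` produces):

```python
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime


def expires_header(editing, minutes=30):
    """Build an HTTP Expires header value: `minutes` in the future for plain
    visitors, immediate expiry for editors so they always see fresh pages."""
    now = datetime.now(timezone.utc)
    when = now if editing else now + timedelta(minutes=minutes)
    return format_datetime(when, usegmt=True)
```

A visitor would receive something like `Expires: Tue, 01 Jan 2030 12:30:00 GMT`, while an editor's copy of the same page would expire immediately, so edits show up on the very next request.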
Just a thought...