Multiple webservers to provide load balancing.

Hi,
I have a client that is hitting the limits of a single-webserver solution, and we are exploring possible ways to increase the available bandwidth for the website. One option being discussed is running a multi-head environment where we load balance the incoming traffic across multiple webservers that share a common C5 database and an NFS server for file storage.
The site-specific data is already separated out from the C5 database.

I am not sure this is even possible...

Anyone have any experience or knowledge on this?

The C5 version is 5.6, and before people suggest upgrading to 5.7/8: yes, that is being considered as well, just not in this thread.

Thanks

FaganSystems
exchangecore replied:
Can you explain what limits you are hitting in your current environment and perhaps a little more about it?

Theoretically, setting up concrete5 to run via an NFS share is doable. That said, in most cases multiple application servers aren't necessary (at least from a performance perspective). Keep in mind that by setting up an NFS share, you're also creating additional overhead on your disks and internal network. If you're compute bound, odds are you can scale up far more easily than you can scale out. And if you're hitting bandwidth limitations, splitting the bandwidth between two hosts probably isn't going to solve the issue, because they almost certainly have to be sitting next to one another in the rack (if not on the same physical hardware) to be performant.

Some things to consider before you jump the gun on scaling out (a quick way to check most of these from the shell is sketched after this list):
What kind of web server are you running (and more importantly how is it set up to interact with php)?

What version of PHP?

Are you using OPCache?

Are you utilizing the full page cache in concrete5 to its maximum potential?

What version of MySQL?

Using InnoDB or MyISAM?

Is your database on the same server or a separate one?

Have you done your due diligence on tuning MySQL?

Are the application/database servers shared with other applications?
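
If it helps, most of those can be checked from the shell in a minute or two. This is only a rough sketch (the database name is a placeholder, and add MySQL credentials as needed):

    php -v                                        # PHP version
    php -i | grep -i 'opcache.enable'             # OPcache enabled? (check the web SAPI too, not just CLI)
    apachectl -V | grep -i 'mpm'                  # which Apache MPM is in use
    apachectl -M | grep -iE 'php|proxy_fcgi'      # mod_php loaded, or PHP handed off to FPM?
    mysql -e "SELECT VERSION();"                  # MySQL server version
    mysql -e "SHOW TABLE STATUS FROM your_c5_db\G" | grep -c 'Engine: InnoDB'   # how many tables are InnoDB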

With concrete5 (especially 5.6 and prior) it's almost always easier and more cost-effective to scale up than it is to scale out.
FaganSystems replied:
Hi,
Thanks for your reply. As I said, there are other threads internally looking at the options you are asking about, but for completeness I am happy to provide the detail. I am well versed in building multi-server systems that perform well (Magento, etc.), and I am a fan of distributed setups using proxy load balancers that include HTTPS termination. These can run in an HA configuration, allowing us to bring additional webserver(s) online for load balancing. Our infrastructure specialists asked if it was possible to multi-head the webservers, hence my question here.

To answer your questions:
When the webserver hits 300 connections per second, with the potential for an additional 100-plus like it did today, it starts failing to serve pages to both guest visitors and logged-in users. The webserver is running Apache, which was struggling to service all of the connections; I was seeing all 24 cores running at close to 100%. While this was happening the server was using 9.8 GB of memory. We will be adding more memory as a short-term measure, but that can only be considered a quick and temporary fix, so a longer-term solution needs to be found. During this period of high usage the network was running well within capacity, and other servers on the 20 GbE fibre backbone, including the DB server, were responsive.
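
For anyone reading along, a rough way to capture that kind of snapshot under load with generic Linux tools (nothing concrete5-specific) is:

    ss -tn state established '( sport = :80 or sport = :443 )' | tail -n +2 | wc -l   # current HTTP(S) connections
    top -bn1 | head -20    # load average, overall CPU use and the busiest processes
    free -m                # memory headroom before adding RAM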

What version of PHP?
5.6

Are you using OPCache?
Yes

Are you utilizing the full page cache in concrete5 to its maximum potential?
Yes

What version of MySQL?
Percona 5.6

Using InnoDB or MyISAM?
InnoDB

Is your database on the same server or a separate one?
No, it is on a separate DB server.

Have you done your due diligence on tuning mysql?
Yes, this is one of my areas of expertise; as I mentioned, I install Magento for e-commerce systems.

Are the application/database servers shared with other applications?
The webserver is a 4-CPU, 6-core VM with 10 GB of memory; it is a dedicated VMware server running on an enterprise-class host.
The DB server is a 4-CPU, 4-core VM with 10 GB of memory, running as a VMware server on a different enterprise host; active monitoring shows it to be lightly loaded.
The network is 5 GbE fibre connecting via high-capacity managed switches to a 20 GbE fibre backbone. I am confident that the issue is not in the network or infrastructure.

I separately run a development environment where the file storage is on an NFS server mounted into the webserver, so that part has already been tackled.

Thanks
exchangecore replied:
Nice. That is a good bit of traffic. The only other question I would have for a single-server environment is: are you leveraging PHP-FPM, or are you still using mod_php or suPHP to serve your PHP requests through Apache? The former is usually a good option to look at if you're not using it.
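
On a Debian/Ubuntu-flavoured Apache the switch looks roughly like this (package and module names are assumptions that vary by distro and PHP build, and you still need to point Apache's PHP handler at the FPM socket via SetHandler or ProxyPassMatch):

    apachectl -M | grep -iE 'php|proxy_fcgi'    # confirm what is handling PHP today
    apt-get install php5-fpm                    # PHP 5.6-era FPM package name (assumption, distro-specific)
    a2dismod php5                               # stop embedding the PHP interpreter in every Apache worker
    a2enmod proxy_fcgi setenvif                 # proxy .php requests to the FPM pool instead
    service apache2 restart && service php5-fpm restart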

But to answer your question, I don't see any reason why you couldn't set up an NFS share and scale out with load balancers.

Another option that I know others have taken is to place a caching server (like Varnish) in front of everything. It might be worth considering, given your server load.

https://github.com/concrete5/addon_varnish_cache...
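
A minimal proof of concept is just to move Apache to another port and let varnishd answer on 80; the port numbers and cache size here are only illustrative:

    varnishd -a :80 -b 127.0.0.1:8080 -s malloc,512m    # listen on 80, use Apache on 8080 as the backend
    varnishstat -1 | grep -E 'cache_hit|cache_miss'     # watch the hit rate once traffic flows

Bear in mind that with the built-in VCL, requests carrying cookies are passed straight through to the backend, so a site-aware VCL is usually needed before much of the real traffic becomes cacheable.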
FaganSystems replied:
Hi,
The setup is running mod_php; one of the threads being investigated is switching to nginx/php-fpm with microcaching enabled as well. So are you saying that running a multi-head webserver configuration is a viable option? Do you know of any information on best practice for implementing this?

I will check out the Varnish add-on.
Thanks
exchangecore replied:
Honestly, I think if you just tossed the whole site onto an NFS share you'd be good to go. I don't know that anyone has come up with best practices for this (and I'm not sure we're likely to see any, with 5.6 being phased out).

From a technical standpoint, though, you need to keep all of these files in sync somehow. The files most likely to change are in the /files/ directory, and the /files/tmp directory is where the sessions live (if memory serves me correctly), so session sharing should be covered there. Next up for changes would be the /packages, /updates, and possibly /config directories, since these change when packages and site updates are installed. Generally, the other files are pretty static unless you have developers making changes to the site.

If you wanted to go bare bones you could probably do something like:
NFS shares for:
/files
/packages
/updates
/config

and rsync the rest of the files over. They're not that likely to change (and truthfully, packages, updates, and config aren't going to change either unless someone is doing updates).
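
Something like this, assuming the web root is /var/www/site and the NFS server exports matching paths (hostnames and paths are placeholders):

    # mount the shared, frequently-changing directories on each web head
    for d in files packages updates config; do
        mount -t nfs nfs01:/exports/site/$d /var/www/site/$d
    done

    # push everything else from the primary head to the second, skipping the shared mounts
    rsync -az --delete \
        --exclude=/files --exclude=/packages --exclude=/updates --exclude=/config \
        /var/www/site/ web02:/var/www/site/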

Otherwise, if you just wanted to do a proof of concept, set up an NFS share for the entire web root, start serving from there, and see what happens (in a test environment, of course). You could either do some "poor man's load balancing" with DNS round robin or set up a load balancer in front. That part is pretty agnostic to concrete5 at that point, though.
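
For the whole-web-root proof of concept, that amounts to roughly the following on each web head (hostnames are placeholders), then comparing responses from each head directly before putting DNS round robin or a load balancer in front:

    mount -t nfs nfs01:/exports/site /var/www/site    # same export mounted on every web head
    curl -s -o /dev/null -w 'web01: %{http_code} %{time_total}s\n' -H 'Host: www.example.com' http://web01/
    curl -s -o /dev/null -w 'web02: %{http_code} %{time_total}s\n' -H 'Host: www.example.com' http://web02/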