Amazon AWS / S3 for file uploads

We want to switch to PHPFog.com for hosting, but they require that images and other uploaded files be stored on a cloud host like Amazon's AWS/S3 (which we want to do anyway). Are there plans to support that in concrete5, or can we not use PHPFog with concrete5?

 
mkly replied on at Permalink Reply
I doubt it will happen any time soon. Files aren't decoupled enough for that to be a simple switch-out/abstraction, afaik.
jamonholmgren replied on at Permalink Reply
That's unfortunate. It really is the future of CMSes. With the recent spate of hacking, web developers really need to lock down their code and separate content from application files.
mkly replied on at Permalink Reply
@jamonholmgren Sorry about the bad info.

@andrew I would love a chance to work on this in my available time. This, and the same for email.
jamonholmgren replied on at Permalink Reply
Agree about email! And db access... phpactiverecord.org... :)
andrew replied on at Permalink Reply
andrew
Awesome. When I have a moment I will gather up what Jon Hartman was working on and write it up in the same way as I have with the login framework stuff. He's about 40-50% there... definitely on its way, although at one point it was specific to 5.4.2.2. It needs more UI love, but it's coming.
NUL76 replied on at Permalink Reply
Same here, love to help out on making this feature happen.
andrew replied on at Permalink Best Answer Reply
We'd love to. It is on our radar. Right now we have the concept of file storage locations, but they're really just paths on a file system. I'd like to see that get abstracted out to places like S3, etc...

Some work began on this but it has stalled. I'd like to write up a roadmap doc for it for interested parties at some point.
jamonholmgren replied on at Permalink Reply
Please do! We are half-considering other CMSes, since we like PHPFog so much and can't use concrete5 with it. But we also like concrete5....
mkly replied on at Permalink Reply
Indeed, I would love to be able to move concrete5 in that direction. Didn't think it was actually on the radar. Awesome.
Mnkras replied on at Permalink Reply
There is a major issue that I see right now: most add-ons (and a lot of places in c5 itself) return the path to the file directly, e.g. /files/1234/5678/9012/somefile.jpg

That makes it very hard to control where files are served from. If all files were to go through the download_file single page or a tool, that would make this a lot easier.

Hope I explained this well enough.

Mike
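To make the problem concrete, here's a rough sketch (entirely hypothetical, not the actual concrete5 API) of the kind of abstraction that would let add-ons stop hard-coding /files/... paths:

```php
<?php
// Hypothetical sketch only -- not the real concrete5 API. The idea is
// that add-ons ask a storage location object for a URL instead of
// building /files/... paths themselves, so the backend can change.
interface FileStorageLocation
{
    public function getURL($relativePath);
}

class LocalStorage implements FileStorageLocation
{
    public function getURL($relativePath)
    {
        return '/files/' . ltrim($relativePath, '/');
    }
}

class S3Storage implements FileStorageLocation
{
    private $bucket;

    public function __construct($bucket)
    {
        $this->bucket = $bucket;
    }

    public function getURL($relativePath)
    {
        return 'https://' . $this->bucket . '.s3.amazonaws.com/'
            . ltrim($relativePath, '/');
    }
}
```

With something like this in place, swapping local storage for S3 would be a configuration change rather than a find-and-replace across every add-on.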
jamonholmgren replied on at Permalink Reply
Is it possible to serve the file via a PHP file and a symlink?

http://php.net/manual/en/function.symlink.php...

Basically, have any requests to /files/x.ext redirect to /filemanager.php?x.ext.

Then, filemanager could serve the Amazon version of the file.

I'm just speculating here, so feel free to shoot holes in it.
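For what it's worth, the rewrite part of this idea might look something like the following in an .htaccess file (just a sketch; filemanager.php is the hypothetical handler from the post above, and the query parameter name is made up):

```apache
# Send every request under /files/ to a PHP handler instead of letting
# Apache serve the file directly. Requires mod_rewrite.
RewriteEngine On
RewriteRule ^files/(.*)$ filemanager.php?file=$1 [L,QSA]
```

That avoids symlinks entirely; the handler decides whether to read a local file or fetch the S3 copy.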
jamonholmgren replied on at Permalink Reply
Or Apache redirection or reverse proxy?

http://httpd.apache.org/docs/2.0/urlmapping.html...

Anything useful there?
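A reverse-proxy variant of this could be sketched as below (assuming mod_proxy/mod_proxy_http is available; the bucket name is made up):

```apache
# Transparently serve /files/ from an S3 bucket. Browsers still see
# /files/... URLs, but Apache fetches the objects from S3.
ProxyPass        /files/ https://example-bucket.s3.amazonaws.com/files/
ProxyPassReverse /files/ https://example-bucket.s3.amazonaws.com/files/
```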
Mnkras replied on at Permalink Reply
That's not something that will be universally supported, and it's just plain hacky, unfortunately.

Making symlinks can cause problems, especially if they are disabled on the server.
andrew replied on at Permalink Reply
If you're using the alternate storage location, all files are piped through download_file. We use this on concrete5.org for secure file downloading. In a more modular FileStorageLocation object, I imagine the object itself could determine whether to do this or not.
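For anyone unfamiliar with the technique, "piping a file through a script" boils down to something like this generic PHP sketch (not the actual download_file code; permission checks and the ID-to-source mapping are omitted):

```php
<?php
// Generic sketch of streaming a file through PHP instead of exposing
// its real location. $source can be a local path or, if
// allow_url_fopen is enabled, a remote URL such as an S3 object URL.
function streamFile($source, $filename)
{
    header('Content-Type: application/octet-stream');
    header('Content-Disposition: attachment; filename="' . $filename . '"');
    readfile($source);
}
```

Because the browser only ever sees the script's URL, the storage backend can change without breaking any links.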
Mnkras replied on at Permalink Reply
I did not know that, playing with it now :)
jamonholmgren replied on at Permalink Reply
Did you get anywhere with this, mnkras?
Mnkras replied on at Permalink Reply
I have not done anything for this. I'm not very familiar with storage locations, and I have other things on my plate as well, so this is on the back burner.
Ricalsin replied on at Permalink Reply
I am about to dig into using s3fs (https://code.google.com/p/s3fs/wiki/FuseOverAmazon) to keep c5 files persistent through AWS auto-scaling, which will wipe the /files directory clean if it is left in the root directory when a machine instance (EBS-backed EC2) is torn down and rebuilt. That is how Amazon's auto-scaling works.

s3fs (a FUSE filesystem for S3) lets an S3 bucket be mounted on the machine instance and mapped to the files directory so that the bucket actually serves the files. This is according to Amazon's tech support, though nobody can say whether it will work with concrete5, since the only real experience so far has been with Drupal.

Any foreseeable roadblock ahead?
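In case it helps anyone trying the same thing, mounting a bucket with s3fs generally looks like this (the bucket name, credentials, and paths are all made up; check the s3fs docs for your version's exact options):

```shell
# Store AWS credentials where s3fs expects them (KEY_ID:SECRET).
echo 'AKIAEXAMPLE:SECRETKEYEXAMPLE' > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs

# Mount the bucket over the concrete5 files directory so writes to
# /path/to/concrete5/files land in S3 and survive instance teardown.
s3fs example-bucket /path/to/concrete5/files -o allow_other
```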