Recently, I had the opportunity to work on an exciting in-house project for Grio called Filedart. This service, which will launch in the near future, lets denizens of the web effortlessly upload content to the cloud by dragging photos or files to a small icon in their taskbar. After the client uploads a file, the service automatically copies a minified URL to the client’s clipboard. This URL leads to a brand-new, public web hosting wrapper for that file, which they can easily share with friends or colleagues. The service is free, and users don’t even have to log in to use it.
It was a very interesting problem to think about in the early stages, particularly because (beyond dabbling) I had not yet had the opportunity to work with Amazon S3, our preferred cloud hosting service. I was tasked with creating the initial Ruby on Rails web app that functioned both as a web host for the file wrapper and as a web service communicating with the Amazon S3 API.
While the Rails web app itself was familiar territory for me, integrating the Rails-to-S3 communication was more of a mystery. Our unique demands called for temporary access credentials to be issued to the client at upload time, and again when users are directed to the web wrapper. After some research and trial-and-error in the handy IRB Ruby console, I found a solution to our problem. For the more problematic uploading side, I create a unique, temporary federated user for each upload request, attach very specific (and temporary) permissions limiting what that federated user can modify in our S3 bucket, and then pass those credentials back to the client.
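To give a flavor of what scoping a federated user looks like, here is a minimal sketch of building the per-upload policy document. The bucket and key names are illustrative, not Filedart's actual values, and the AWS STS call (GetFederationToken, via the aws-sdk gem) is shown only in comments since it needs real credentials; the policy itself is plain JSON built with the standard library.

```ruby
require "json"

# Build a one-off IAM policy that allows PutObject on a single key only.
# In the app, this document is handed to AWS STS (GetFederationToken)
# together with a unique user name and a short duration, which mints the
# temporary credentials we pass back to the client.
def upload_policy(bucket, key)
  {
    "Version"   => "2012-10-17",
    "Statement" => [
      {
        "Effect"   => "Allow",
        "Action"   => ["s3:PutObject"],
        "Resource" => ["arn:aws:s3:::#{bucket}/#{key}"]
      }
    ]
  }.to_json
end

policy = upload_policy("filedart-uploads", "uploads/abc123/photo.jpg")
# The STS call itself (requires the aws-sdk gem and live AWS credentials)
# would look roughly like:
#   sts   = Aws::STS::Client.new
#   creds = sts.get_federation_token(
#     name: "upload-abc123", policy: policy, duration_seconds: 900
#   ).credentials
```

Because the policy names exactly one object ARN, the federated user can write that one key and nothing else, and the credentials expire on their own.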
Interacting with S3 for downloading was much simpler. S3 comes with a built-in API that creates a secure, temporary, public URL for a file located in one of your buckets. I simply had to call this service, and then design a file wrapper for each kind of file we might host. For example, images are actually loaded into the wrapper and resized so as not to display a huge image. This was somewhat interesting as well, as I had to work with a graphics library called ImageMagick on the server to process incoming images, create a unique thumbnail of appropriate size, and then re-upload that thumbnail to the S3 bucket so it could be accessed later in the wrapper.
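In practice the Ruby SDK generates these expiring URLs for you, but it is worth seeing what is under the hood. The sketch below implements S3's classic query-string authentication scheme with nothing but the standard library: sign a short canonical string with your secret key and append the result as URL parameters. The access key, secret, bucket, and key here are all placeholders.

```ruby
require "openssl"
require "base64"
require "cgi"

# Build a temporary public GET URL for an S3 object by signing
# "GET\n\n\n<expires>\n/<bucket>/<key>" with the account's secret key
# (S3's classic query-string authentication). Credentials below are
# placeholders, not real keys.
def presigned_get_url(bucket, key, access_key, secret_key, expires)
  string_to_sign = "GET\n\n\n#{expires}\n/#{bucket}/#{key}"
  hmac = OpenSSL::HMAC.digest(OpenSSL::Digest.new("sha1"), secret_key, string_to_sign)
  signature = CGI.escape(Base64.strict_encode64(hmac))
  "https://#{bucket}.s3.amazonaws.com/#{key}" \
    "?AWSAccessKeyId=#{access_key}&Expires=#{expires}&Signature=#{signature}"
end

url = presigned_get_url("filedart-files", "uploads/abc123/photo.jpg",
                        "AKIDEXAMPLE", "not-a-real-secret",
                        Time.now.to_i + 900)
```

Once the `Expires` timestamp passes, S3 rejects the URL, so the wrapper can hand out links without ever exposing the bucket publicly.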
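The thumbnail step can be sketched as a shell-out to ImageMagick's `convert` tool. The helper below only builds the command array (the paths and size are illustrative); the trailing `>` in the geometry tells ImageMagick to shrink images larger than the bound while leaving smaller ones untouched.

```ruby
# Build the ImageMagick `convert` invocation used to thumbnail an upload.
# The "NNxNN>" geometry only shrinks images that exceed the bound, so
# small images pass through at their original size. Paths are illustrative.
def thumbnail_command(source, dest, max_px = 200)
  ["convert", source, "-resize", "#{max_px}x#{max_px}>", dest]
end

cmd = thumbnail_command("/tmp/upload.jpg", "/tmp/upload_thumb.jpg")
# In the app this would be executed with system(*cmd), and the resulting
# thumbnail re-uploaded to the S3 bucket alongside the original.
```

Passing the arguments as an array (rather than one interpolated string) avoids shell-escaping problems with user-supplied filenames.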
This was a very interesting project for me, and I’m happy with how it shaped up. Now I feel much more comfortable dealing with cloud computing!