This is a rewrite of a package by Lepozepo, extended with metadata ({ CacheControl: ..., Expires: ... }) in order to control caching of S3 files.
Requires Meteor Session:
meteor add session
npm i --save s3up-meta@git+https://github.com/paulincai/s3up-meta.git @aws-sdk/client-s3
Session is used to expose the upload percentage without triggering a React component re-render. Other techniques may be used.
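For illustration, a minimal sketch of reading such a progress value reactively outside of React; the Session key name 'uploadProgress' is an assumption, not part of the package API:
import { Session } from 'meteor/session'
import { Tracker } from 'meteor/tracker'

// Assumption: your upload handling stores the percentage with Session.set('uploadProgress', percent)
Tracker.autorun(() => {
  const percent = Session.get('uploadProgress') // reactive read, no React re-render involved
  if (percent != null) {
    console.log(`Upload progress: ${percent}%`)
  }
})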
Add your AWS configuration to your settings.json:
{
"private": {
"s3": {
"key": "xxxxxx",
"secret": "xxxxxxxxxxxxxx",
"bucket": "xxxxxxxxxxxxxx",
"region": "eu-central-1"
}
}
}
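On the server, these values can then be read from Meteor.settings; a short sketch (the "private" branch of the settings is never shipped to the client):
import { Meteor } from 'meteor/meteor'

const s3Settings = (Meteor.settings.private && Meteor.settings.private.s3) || {}
const { key, secret, bucket, region } = s3Settings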
Set up your authorizer functions. This could go in your server startup code. SERVER SIDE
import { Meteor } from 'meteor/meteor'
import { check, Match } from 'meteor/check'
import { authorizer as Authorizer } from '@activitree/s3up-meta/server'
/**
 * S3 image upload and delete methods. Although these methods live on the server,
 * the file upload itself goes directly from the client to S3. Deletion happens only server side.
 */
let services = {
key: '',
secret: '',
bucket: 'missing',
region: 'eu-central-1'
}
if (process.env.AWS_ACCESS_KEY_ID && process.env.AWS_SECRET_ACCESS_KEY) {
services = {
key: process.env.AWS_ACCESS_KEY_ID,
secret: process.env.AWS_SECRET_ACCESS_KEY,
bucket: 'bucket_name', // perhaps get it via env vars as well
region: process.env.AWS_S3_REGION
}
}
const otherService = {
key: process.env.AWS_OTHER_SERVICE_KEY_ID,
secret: process.env.AWS_OTHER_SERVICE_SECRET_ACCESS_KEY,
bucket: 'other_bucket',
region: process.env.AWS_OTHER_SERVICE_S3_REGION || 'eu-central-1'
}
const authorizer = new Authorizer(services)
const authorizerOtherService = new Authorizer(otherService)
Meteor.methods({
authorize_upload: function (ops, metadata) {
check(ops, Object)
check(metadata, Match.OneOf(Object, undefined, null))
this.unblock()
return authorizer.authorizeUpload(metadata, ops)
},
deleteServerSide: function (ops) { // ops contains the S3 keys to delete: { paths: [fileName, fileName] }
check(ops, Object)
this.unblock()
return authorizer.deleteServerSide(ops)
},
authorize_upload_other_service: function (ops, metadata) {
check(ops, Object)
check(metadata, Match.OneOf(Object, undefined, null))
this.unblock()
return authorizerOtherService.authorizeUpload(metadata, ops)
},
deleteServerSideOtherService: function (ops) {
check(ops, Object)
this.unblock()
return authorizerOtherService.deleteServerSide(ops)
}
// etc ...
})
Sign the upload. Receive the signature from the Meteor server and use it to upload directly from the client to S3. CLIENT SIDE. The example below is written as a Redux action; adapt it to whatever pattern suits your app.
Concept: before the upload starts, show a spinner. If, for instance, you replace an avatar image, upload the new image and, on success, delete the old avatar.
import { Meteor } from 'meteor/meteor'
import { uploadFile } from 's3up-meta/client'
import b64toBlob from '../../helpers/b64toBlob' // I use my own blob library
export const UPLOAD_IMAGE_AWS = 'UPLOAD_IMAGE_AWS' // this action returns an image URL as payload
export const DELETE_IMAGE_AWS = 'DELETE_IMAGE_AWS'
export const SET_STATE_UPLOADER = 'SET_STATE_UPLOADER'
const setStateUploader = states => {
return {
type: SET_STATE_UPLOADER,
payload: states
}
}
const uploadImageAWS = (imageData, path, size) => {
if (!imageData || !path) {
return {
type: UPLOAD_IMAGE_AWS,
payload: null
}
}
return dispatch => {
dispatch(setStateUploader({ showUploadSpinner: true })) // from the action above
let blobData = imageData.slice(23) // strip the data URL prefix 'data:image/jpeg;base64,' (23 characters)
const metadata = { CacheControl: 'max-age=8460000', Expires: 'Thu, 15 Dec 2050 04:08:00 GMT' } // this is a veeeeery long time
blobData = b64toBlob(blobData, 'image/jpeg')
path = path === 'post' ? 'postsProxy' : 'avatar' // just some conditions to send the file in S3 to one folder or another
uploadFile(blobData, {
authorizer: Meteor.call.bind(this, 'authorize_upload', metadata), // authorization so I can write to S3; must match the Meteor method name defined server side
path, // to where I write in my bucket in S3
type: 'image/jpeg', // or something else...PNG, PDF etc
metadata, // see this constant above
upload_event: (err, res) => {
if (err) {
dispatch({
type: SET_STATE_UPLOADER,
payload: null
})
} else {
if (res.relative_url) {
// the rest below is irrelevant, it can be whatever you need it to be. Just make use of res.relative_url...
let image = null
if (path === 'avatar' || path === 'covers') { image = res.relative_url.substring(8) } else if (path === 'postsProxy') { image = res.relative_url.substring(12) }
const payload = path === 'postsProxy' ? { postImage: image, size } : path === 'covers' ? { coverImage: image, size } : { avatarImage: image, size }
dispatch({
type: UPLOAD_IMAGE_AWS,
payload
})
}
}
}
})
}
}
const deleteImageAWS = (path, oldImage) => {
return dispatch => {
if (oldImage) {
Meteor.call('deleteServerSide', { paths: [oldImage] }, err => { // paths holds the S3 keys to delete (assuming oldImage is the key of the old file)
dispatch({
type: DELETE_IMAGE_AWS,
payload: err ? { err } : 'OK'
})
})
}
}
}
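For completeness, a hedged usage sketch of the two actions above from React; the hook name, module path and size argument are illustrative assumptions, not part of the package:
import { useDispatch } from 'react-redux'
import { uploadImageAWS, deleteImageAWS } from './imageActions' // assumes the actions above are exported from this module

export const useAvatarUpload = () => {
  const dispatch = useDispatch()

  return (base64DataUrl, oldImageKey) => {
    // Upload the new avatar; the spinner state and the UPLOAD_IMAGE_AWS dispatch happen inside the action
    dispatch(uploadImageAWS(base64DataUrl, 'avatar', 'original'))
    // Then remove the previous image by its S3 key
    if (oldImageKey) dispatch(deleteImageAWS('avatar', oldImageKey))
  }
}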
Notice how uploadFile requires an authorizer function to communicate with the server. In Meteor this is a Meteor.method, but you can use anything.
Deletion (deleteServerSide) takes place server side: the file keys (S3 paths) are sent to the Meteor server, which removes the objects. It does not require an authorizer since it does not use the slingshot principle (uploading from the client directly to S3).
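If you are not using Redux, the shape of a bare upload is the same; a minimal sketch reusing the options already shown above (someBlob stands for any Blob you produced on the client):
import { Meteor } from 'meteor/meteor'
import { uploadFile } from 's3up-meta/client'

const metadata = { CacheControl: 'max-age=8460000', Expires: 'Thu, 15 Dec 2050 04:08:00 GMT' }

uploadFile(someBlob, {
  authorizer: Meteor.call.bind(this, 'authorize_upload', metadata), // the Meteor method defined server side
  path: 'avatar', // folder inside your bucket
  type: 'image/jpeg',
  metadata,
  upload_event: (err, res) => {
    if (err) {
      console.error('Upload failed', err)
    } else {
      console.log('Uploaded to', res.relative_url)
    }
  }
})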
For all of this to work you need to create an AWS account.
- Navigate to your bucket
- On the top right side you'll see your account name. Click it and go to Security Credentials.
- Create a new access key under the Access Keys (Access Key ID and Secret Access Key) tab.
- Enter this information into your app as shown in the settings.json example above.
- Your region can be found under the "Properties" button, "Static Website Hosting" tab, e.g. bucketName.s3-website-eu-west-1.amazonaws.com (here the region is eu-west-1).
- If your region is "us-east-1" or "us-standard" then you don't need to specify this in the config.
- Upload a blank index.html file (anywhere is ok, I put it in root).
- Select the bucket's properties by clicking on the bucket (from All Buckets), then the "Properties" button at the top right.
- Click the "Static Website Hosting" tab.
- Click "Enable Website Hosting".
- Fill the Index Document input with the path to your index.html without a trailing slash, e.g. afolder/index.html or index.html.
- Click "Save"
You need to set permissions so that everyone can see what's in there.
- Select the bucket's properties and go to the "Permissions" tab.
- Click "Edit CORS Configuration" and paste this (a JSON equivalent for the current S3 console is shown after this list):
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
- Click "Edit bucket policy" and paste this (replace the bucket name with your own):
{ "Version": "2008-10-17", "Statement": [ { "Sid": "AllowPublicRead", "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::YOURBUCKETNAMEHERE/*" } ] }
- Click "Save".
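Note: the current S3 console expects the CORS configuration as JSON rather than XML. An equivalent of the rules above would be:
[
  {
    "AllowedOrigins": ["*"],
    "AllowedMethods": ["PUT", "POST", "GET", "HEAD"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }
]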
It might take a couple of hours before you can actually start uploading to S3. Amazon takes some time to make things work.
Enjoy! This took me a long time to figure out, and I'm sharing it so that nobody else has to go through all that.
[TODO]
- http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/frames.html
- https://github.com/Differential/meteor-uploader/blob/master/lib/UploaderFile.coffee#L169-L178
- http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html
- http://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html
- https://github.com/CulturalMe/meteor-slingshot/blob/master/services/aws-s3.js