I'm deploying my first application on AWS and there are a couple of things I just cannot find a solution for.
1. File system
The application uses Lucene and allows image uploading, so I'm guessing I need an S3 bucket to host the Lucene index and the images.
For testing purposes, on my local system I would place this line of code in Tomcat 7's server.xml:
<Context path="/uploads" docBase="D:/myapp/uploads" />. Now, as you probably know, all the requests starting with /uploads would be routed to D:/myapp/uploads by the server.
Furthermore, the Lucene API needs an absolute path in order to find the index directory:
FSDirectory.open(new File(ConfigUtil.getProperty("D:/myapp/index")))
My first question is about this configuration in the AWS Console. How can I obtain those "D:/aaa/bbb/" paths?
2. Emailing system
After registration, a confirmation email is sent to the user. Again, in testing I used Google's smtp.gmail.com. I would need a host, a username and a password to make the javax.mail API work.
I have no idea how to obtain those credentials. Is it an AWS matter or a domain registrar matter (I'm using Namecheap)?
Thanks for your help!
To host the images on S3, you have two options.
Either accept uploads on an EBS-backed EC2 instance first, as you did on your test system, and move the files to S3 asynchronously afterwards (the move itself is sketched below).
In this case, you can choose any path you wish on the EBS volume to temporarily store the uploaded files.
Or modify your front-end to allow submission to S3 directly.
Likewise, you can choose any path you wish on the EBS volume to store Lucene's index.
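For the first option, the move is essentially a single putObject call. A minimal sketch with the AWS SDK for Java (v1), assuming a hypothetical bucket named myapp-uploads:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import java.io.File;

public class UploadMover {
    private static final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    // Copies a file saved under the Tomcat docBase to S3, then removes the local copy.
    public static void moveToS3(File localFile, String key) {
        s3.putObject("myapp-uploads", key, localFile); // hypothetical bucket name
        if (!localFile.delete()) {
            localFile.deleteOnExit();
        }
    }
}

Whatever local path you pick under docBase is only a staging area; the permanent home of the images is the bucket.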
Regarding the use of javax.mail:
set smtp.gmail.com as host
create a gmail account
use the newly created account's username and password
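Putting those three steps together, a minimal javax.mail sketch for the confirmation email might look like this (the Gmail address, password and message text are placeholders):

import java.util.Properties;
import javax.mail.*;
import javax.mail.internet.*;

public class ConfirmationMailer {
    public static void send(String to) throws MessagingException {
        final String username = "myapp.notifications@gmail.com"; // the Gmail account you created
        final String password = "the-account-password";          // placeholder

        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.gmail.com");
        props.put("mail.smtp.port", "587");
        props.put("mail.smtp.auth", "true");
        props.put("mail.smtp.starttls.enable", "true");

        Session session = Session.getInstance(props, new Authenticator() {
            @Override
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication(username, password);
            }
        });

        Message message = new MimeMessage(session);
        message.setFrom(new InternetAddress(username));
        message.setRecipients(Message.RecipientType.TO, InternetAddress.parse(to));
        message.setSubject("Please confirm your registration");
        message.setText("Click the link below to confirm your account."); // placeholder body
        Transport.send(message);
    }
}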
I know I can use PowerShell to initiate and manage a BITS (Background Intelligent Transfer Service) download from my server over VPN, and I am looking to do that to stage large install resources locally while a regular user is logged on, ready for use in software installs and updates later. However, I would like to also support cloud services for the download repository, as I foresee (some) firms no longer having centralized servers and VPN connections, just cloud repositories and a distributed workforce. To that end I have tested using Copy-S3Object from the AWS PowerShell tools, and that works. But it isn't throttle-able so far as I can tell. So I wonder, is there a way to configure my AWS bucket so that I can use BITS to do the download, but still constrained by AWS credentials?
And if there is, is the technique valid across multiple cloud services, such as Azure and Google Cloud? I would LIKE to be cloud platform agnostic if possible.
I have found this thread, which seems to suggest that creating presigned URLs would work. But my understanding of that process is, well, nonexistent. I am currently creating credentials for every user. Do I basically assign those users to an AWS group and give that group some permissions, and then PowerShell can be used to sign a URL with the particular user's credentials, and that URL is what BITS uses? So a user who has been removed from the group would no longer be able to create signed URLs, and so would no longer be able to access the available resources?
Alternatively, if there is a way to throttle Copy-S3Object that would work too. But so far as I can tell that is not an option.
I'm not sure of a way to throttle Copy-S3Object, but you can definitely point BITS at a pre-signed S3 URL.
For example, if you have an AWS group with users a/b/c in it, and the group has a policy attached that allows the relevant access to your bucket, those users a/b/c will be able to create pre-signed URLs for objects in that bucket. The following creates a pre-signed URL for an object called 'BITS-test.txt':
aws s3 presign s3://yourbucketnamehere/BITS-test.txt
That will generate a pre-signed URL that can be passed into an Invoke-WebRequest command.
This URL is not restricted to only those users though; anybody with this URL will be able to download the object. But only users a/b/c (or anyone else with access to that bucket) will be able to create these URLs. If you don't want users a/b/c to be able to create these URLs anymore, then you can just remove them from the AWS group like you mentioned.
You can also add an expiry param to the presign command, for example --expires-in 3600, which keeps the link valid for only that period of time (in this case one hour; the value is given in seconds).
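For example, to keep the link alive for an hour (same placeholder bucket and object as above):

aws s3 presign s3://yourbucketnamehere/BITS-test.txt --expires-in 3600

The output is an ordinary HTTPS GET URL, so besides Invoke-WebRequest it can also be handed to BITS as the source of a transfer (for example as the -Source of Start-BitsTransfer), which gets you the BITS throttling behaviour you were after.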
We are a small startup, currently in the prototype/development phase, and are using AWS to host our application and (test) domain. We have hosted our domain on Route 53 and registered it with SES for email services.
I am new to AWS and have used the documentation to understand how to set these things up. Now it appears that our account(s) have been compromised/hacked and someone is misusing them to send malicious emails. I am unsure what the extent of the hack is, and whether the attacker only managed to get access to the SES and database credentials. I received an email from the SES team which shows emails have been sent from my domain (not by me), from an address I never created on my domain.
Additionally, I have noticed that someone is trying to access my database (from China) and the database is always at 100%. The database log says it has blocked an IP (which is based in China).
We are using GitHub to store code, and in our code we had credentials for AWS and SMTP servers, so I think it's possible that someone stole keys from there (we have taken the credentials out of GitHub now).
Can someone help me understand what steps I need to take. I am thinking of shutting down this environment and creating a new one, but I am unsure how to regain control of my domain and shut down all the email addresses created by the spammer on my domain. I am also unclear about the extent of the hack, and whether this will come back.
Can someone please help.
You should never store your credentials in GitHub.
In fact, you should use roles instead of credentials stored directly in the code.
So, step by step you should:
Remove the credentials from GitHub and from your code (done)
Reset your credentials and do not store the new ones in the code
Create a role with the policy according to your needs
Assign that role to your resources.
You can find more info here.
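To make the role step concrete: once a role (instance profile) is attached to the EC2 instance, the SDK picks the credentials up automatically through its default provider chain, so no keys appear in the code at all. A minimal sketch with the AWS SDK for Java, using a hypothetical bucket name:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class S3Access {
    public static void main(String[] args) {
        // No access key or secret key anywhere in the code or repository:
        // the default credentials provider chain finds the instance profile (role) credentials.
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        s3.listObjects("my-app-bucket").getObjectSummaries()   // hypothetical bucket name
          .forEach(summary -> System.out.println(summary.getKey()));
    }
}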
I want to build an application using Amazon Web Services (AWS).
The way the application should work is this:
I make a program that lets the user import a large file in an external format and send it to AWS (S3?) in my own format.
Next many users can access the data from web and desktop applications.
I want to charge per user accessing the data.
The problem is that the data on AWS must be in an unintelligible format, or the users may copy the data over to another AWS account where I cannot charge them. In other words, the user needs to do some "decrypting" of the data before it can be used. On the web this must be done in JavaScript, which is delivered as readable source and would allow the users to figure out my unintelligible format.
How can I fix this problem?
Is there for instance a built in encryption/decryption mechanism?
Alternatively is there some easy way in AWS to make a server that decrypts the data using precompiled code that I upload to AWS?
In general when you don't want your users to access your application's raw data you just don't make that data public. You should build some sort of server-side process that reads the raw data and serves up what the user is requesting. You can store the data in a database or in files on S3 or wherever you want, just don't make it publicly accessible. Then you can require a user to login to your application in order to access the data.
You could host such a service on AWS using EC2 or Elastic Beanstalk or possibly Lambda. You could also possibly use API Gateway to manage access to the services you build.
Regarding your specific question about a service on AWS that will encrypt your public data and then decrypt it on the fly, there isn't anything that does that out of the box. You would have to build such a service and host it on Amazon, but I don't think that is the right way to go about this at all. Just don't make your data publicly accessible in the first place, and make all requests for data go through some service to verify that the user should be able to access the data. In your case that would mean verifying that the user has paid to access the data they are requesting.
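A minimal sketch of that gatekeeper idea as a Java servlet; the bucket name and the isPayingUser check are placeholders for whatever login and billing logic you already have:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.io.InputStream;

public class DataServlet extends HttpServlet {
    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        if (!isPayingUser(req)) {                       // placeholder for your own auth/billing check
            resp.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        // The bucket stays private; only this server-side code ever reads the raw data.
        S3Object object = s3.getObject("my-private-data-bucket", req.getParameter("dataset"));
        try (InputStream in = object.getObjectContent()) {
            in.transferTo(resp.getOutputStream());
        }
    }

    private boolean isPayingUser(HttpServletRequest req) {
        return false; // placeholder: verify the session and the user's subscription here
    }
}

Because the code that turns the stored data into responses lives on the server, there is nothing for a client to reverse engineer in JavaScript.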
I am developing an app server with Node.js and AWS.
I am setting up the server environment with ELB and EC2s.
I am using ELB as the load balancer and have attached several app server EC2 instances to it.
And one EC2 instance is used for MongoDB.
My question is about requests that include a file upload.
I think uploaded files should not stay on the app servers (EC2 instances), so I will try to save uploaded files in S3 and allow the app servers (EC2 instances) to access it.
The rough solution is that when an app server accepts a file from the client, it moves the file to S3 and deletes the local copy.
But that causes some performance loss, and it doesn't feel like a clean way to do it.
Is this the best way, or is there another way to solve it?
I think uploading files to S3 is the best way.
But a file is uploaded together with other data (for example, a profile upload: name: String, age: Number, profileImage: File).
I need to process the other data on the app server, so the client should not upload to S3 directly.
Is there a better idea?
Please save me.
P.S.: Please let me know if you cannot understand my wording, because I am not a native speaker. If so, I will do my best to add more explanation!
You can directly upload to S3 using temporary credentials that allow the end user to write to your bucket.
There is a good article with detailed code for doing exactly what you are trying to do with node.js here.
Answers that refer to external links are frowned upon on SO, so in a nutshell:
include the aws sdk in your application
provide it with appropriate credentials
use those credentials to generate a signed URL with a short lifespan
provide the end user with the signed URL they can then use to upload, preferably asynchronously with progress feedback
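Your server is Node, but the shape of the call is the same in every SDK (the JavaScript SDK exposes it as s3.getSignedUrl). A sketch of the steps above using the AWS SDK for Java, with the bucket name and key layout as placeholders:

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
import java.net.URL;
import java.util.Date;

public class UploadUrlFactory {
    private static final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    // Returns a URL the client can PUT the profile image to for the next 15 minutes.
    public static URL uploadUrlFor(String userId) {
        Date expiration = new Date(System.currentTimeMillis() + 15 * 60 * 1000);
        GeneratePresignedUrlRequest request =
                new GeneratePresignedUrlRequest("my-profile-images", userId + "/profile.jpg") // placeholders
                        .withMethod(HttpMethod.PUT)
                        .withExpiration(expiration);
        return s3.generatePresignedUrl(request);
    }
}

The client PUTs the image straight to that URL while the name and age fields go to your app server as before; the server only has to record the resulting S3 key on the profile, so no image bytes ever pass through an EC2 instance.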
I have data from multiple users inside a single S3 account. My desktop app has an authentication system which lets the app know who the user is and which folder to access on S3. But the desktop app has the access credentials for the whole S3 bucket.
Somebody told me this is not secure, since a hacker could intercept the requests from the app to S3 and use the credentials to download all the data.
Is this true? And if so, how can I avoid it? (He said I need a client-server setup in the AWS cloud, but this isn't clear to me...)
Btw, I am using the Boto Python library to access S3.
thanks
I just found this:
Don't store your AWS secret key in the app. A determined hacker would be able to find it eventually. One idea is that you have a web service hosted somewhere whose sole purpose is to sign the client's S3 requests using the secret key; those requests are then relayed to the S3 service. Therefore you get your users to authenticate against your web service using credentials that you control. To re-iterate: the clients talk directly to S3, but get their requests "rubber-stamped"/approved by you.
I don't see S3 necessarily as a flat structure - if you use filesystem notation "folder/subfolder/file.ext" for the keys.
Vanity URLs are supported by S3; see http://docs.amazonwebservices.com/AmazonS3/2006-03-01/VirtualHosting.html. Basically the URL "http://s3.amazonaws.com/mybucket/myfile.ext" becomes "http://mybucket.s3.amazonaws.com/myfile.ext", and you can then set up a CNAME in your DNS that maps "www.myname.com" to "mybucket.s3.amazonaws.com", which results in "http://www.myname.com/myfile.ext".
Perfect timing! AWS just announced a feature yesterday that's likely to help you here: Variables in IAM policies.
What you would do is create an IAM user for each of your users. This will allow you to have a separate access key and secret key for each user. Then you would attach a policy that restricts each user's access to their own portion of the bucket, based on username. (The example that I linked to above covers this use case well.)
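For instance, a policy along these lines attached to the group (the bucket name is a placeholder) confines every IAM user to the folder that matches their own username:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-user-data-bucket",
      "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}}
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-user-data-bucket/${aws:username}/*"
    }
  ]
}

Your desktop app then ships only that user's keys, so a compromised client exposes that user's folder and nothing else.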