Tomcat: Cannot load the credentials from the credential profiles file of AWS

I'm implementing a REST API which needs to download files from Amazon S3.
When I call the API, it throws this exception:
com.amazonaws.AmazonClientException: Cannot load the credentials from the credential profiles file. Please make sure that your credentials file is at the correct location (~/.aws/credentials), and is in valid format.
When I run the same method directly, outside of Tomcat, it succeeds. I'm using Tomcat 9.0.2; my guess is that the default credentials directory ~/.aws/credentials resolves differently for the Tomcat application.
So I checked $CATALINA_HOME, which is /opt/apache-tomcat-9.0.2, and copied the credentials there, but it still doesn't work.
This is my first time asking a question here, so I hope I've expressed the issue clearly. Thanks for any help.

The home directory ~ depends on the user running the Tomcat server, so if Tomcat is running as the root user, for example, you'd need to place the credentials in /root/.aws/credentials.
Depending on the Tomcat version, you can usually find the default user in /etc/default/tomcat7. It would appear as something like:
TOMCAT7_USER=tomcat7
In that case, the home directory would be /home/tomcat7, and thus the credentials would go in /home/tomcat7/.aws/credentials.
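If you're not sure which user Tomcat runs under, a quick shell check will tell you; a minimal sketch, assuming a Linux box where the user turns out to be tomcat7:
ps -o user= -p "$(pgrep -f catalina | head -1)"   # the user owning the Tomcat process, e.g. tomcat7
getent passwd tomcat7 | cut -d: -f6               # that user's home directory, e.g. /home/tomcat7
sudo mkdir -p /home/tomcat7/.aws
sudo cp ~/.aws/credentials /home/tomcat7/.aws/
sudo chown -R tomcat7: /home/tomcat7/.aws
The SDK resolves ~ from the process owner's home directory (the user.home property), not from $CATALINA_HOME, which is why copying the file into the Tomcat install directory has no effect.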

Related

Provide directory listings using API Gateway as a proxy to S3

I'm trying to host a static site through S3 and API Gateway (using S3 directly isn't an option due to security policy). One of the pages runs a client-side script to pull back a set of files from a specific folder on the server. I've set up the hosting following the Amazon tutorial.
For this to run, my script needs to be able to obtain the list of files for a specific folder.
If I were hosting the site on my own server using Apache, I could rely on the directory listing feature, where a GET on a folder with no index.html returns a file list. The tutorial suggests this should be possible, but I can't get it to work. If I submit a request to a particular {prefix}/{identifier}, I can retrieve the specific file, but sending a request to {prefix}/ returns an error.
Is there a way I can replicate directory listings so my JavaScript can pull the listing down and read it, or is the only solution to write a server-side API in Lambda?
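For reference, S3's native listing API is a GET on the bucket root with a prefix query parameter, not a GET on the folder path itself, which is why {prefix}/ alone errors out. A hypothetical call against a public bucket looks like:
curl "https://example-bucket.s3.amazonaws.com/?list-type=2&prefix=images/&delimiter=/"
It returns a <ListBucketResult> XML document rather than an HTML page, so an API Gateway proxy would need to pass the query string through, and the client script would need to parse XML.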

Configuring Google Cloud Load Balancer path rules

I'm trying to configure the Google Cloud load balancer to do the following:
I have a website running on a WordPress machine in a VM instance, which I want users to reach when they enter outairnet.com.
And I have a separate HTML file that I want users to reach when they access outairnet.com/map.
WordPress is running on a Compute Engine VM, connected to an instance group and to a backend service. The separate HTML file is in a storage bucket, connected to a backend bucket.
I've tried to configure a very simple path forwarding rule, which made sense to me. But the result is that anyone accessing outairnet.com/* gets the WordPress site (which is fine),
yet accessing outairnet.com/map doesn't point to the storage bucket with the HTML file, while accessing outairnet.com/index.html does point to the separate HTML file.
I think I'm onto the problem but still can't solve it:
it looks like the Google console adds a /* rule even when I try to delete it,
so a /* path rule catches everything, despite a more specific rule like /mypath/* existing alongside it.
After removing it, it just gets re-added automatically for some reason. Why?
It's possible - there are a few steps involved: creating a bucket with your static page, adding it as a backend bucket in your load balancer, and creating a new path rule in it to route the requests.
And now the details:
Create a new bucket - pick a name you like (outairnet-static, or something meaningful to you so you don't delete it by accident). You can ignore all the tutorials saying it has to have the exact name of your domain - since it will only be hosting a file accessible under outairnet.com/mylink/, it will work regardless of the name used. I tested it.
Create a directory in your bucket named exactly as the path under which you want it to be reached. If you want outairnet.com/mylink/, then the directory's name has to be mylink. Upload your files into that directory, and name your main index file index.html unless you want to provide the full file path.
Make the bucket available to everyone.
Go to your LB configuration and edit backend services; add a new backend bucket.
Go to your Host and Path Rules and configure a new path; Enter the name of your site and the path (Remember that the path value must be /mylink/*.) and select the bucket you've just created from the dropdown list.
No changes necessary for the frontend. Save the changes and in a few moments it should be working.
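If you prefer the command line, the same setup can be sketched with gcloud (the URL map, backend, and bucket names here are hypothetical placeholders for whatever your LB uses):
gcloud compute url-maps add-path-matcher my-lb-url-map \
  --path-matcher-name=map-matcher \
  --default-service=wp-backend-service \
  --backend-bucket-path-rules="/map/*=outairnet-static" \
  --new-hosts=outairnet.com,www.outairnet.com
Note that listing both the bare and www hostnames matters: a path matcher attached to only one host rule won't fire for the other.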
I just added another path rule with just "/" directing to the VM and that seemed to do it. The only remaining glitch: www.outairnet.com/map is fine, but outairnet.com/map (without www) directs to the VM and not the bucket.

Uploading data to Amazon S3 directly from a URL [duplicate]

Is it possible to upload a file to S3 from a remote server?
The remote server is basically a URL-based file server. For example, http://example.com/1.jpg serves the image. It doesn't do anything else, and I can't run code on this server.
Is it possible to have another server tell S3 to upload a file from http://example.com/1.jpg?
upload from http://example.com/1.jpg
server -------------------------------------------> S3 <-----> example.com
If you can't run code on the server or execute requests then, no, you can't do this. You will have to download the file to a server or computer that you own and upload from there.
You can see the operations you can perform on Amazon S3 at http://docs.amazonwebservices.com/AmazonS3/latest/API/APIRest.html
Checking the operations for both the REST and SOAP APIs, you'll see there's no way to give Amazon S3 a remote URL and have it grab the object for you. All of the PUT requests require the object's data to be provided as part of the request, meaning the server or computer initiating the web request needs to have the data.
I had a similar problem in the past where I wanted to download my users' Facebook thumbnails and upload them to S3 for use on my site. The way I did it was to download the image from Facebook into memory on my server, then upload it to Amazon S3 - the whole thing took under 2 seconds. After the upload to S3 completed, I wrote the bucket/key to a database.
Unfortunately there's no other way to do it.
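As a minimal sketch of that relay approach (assuming the AWS CLI is configured on the machine you control; the bucket name is a hypothetical placeholder):
curl -s http://example.com/1.jpg | aws s3 cp - s3://my-bucket/1.jpg
The lone dash tells the CLI to read the object body from stdin, so the file is streamed through without touching disk.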
I think the suggestion above is quite good: you can scp the file to an EC2 instance and push it to the S3 bucket from there. Supplying the .pem file gives you passwordless authentication, and a PHP script can validate the extensions and pass the file as an argument to the scp command.
The only problem with this solution is that you must have an instance in AWS. You can't use it if your website is hosted with another hosting provider and you are trying to upload files straight to an S3 bucket.
Technically it's possible using AWS Signature Version 4. Assuming your remote server plays the client role, you could prepare a form on the main server and send the signed form fields to the remote server for it to curl. Detailed example here.
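Sketched out, the remote host's call could look like this (every value is a hypothetical placeholder produced by the main server when it signs the POST policy):
curl https://my-bucket.s3.amazonaws.com/ \
  -F key=1.jpg \
  -F x-amz-algorithm=AWS4-HMAC-SHA256 \
  -F x-amz-credential="AKIAEXAMPLE/20240101/us-east-1/s3/aws4_request" \
  -F x-amz-date=20240101T000000Z \
  -F policy=BASE64_ENCODED_POLICY \
  -F x-amz-signature=HEX_SIGNATURE \
  -F file=@1.jpg
Keep the file field last; S3 ignores any form fields that come after it.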
You can use the scp command from a terminal.
1) Using the terminal, go to the directory containing the file you want to transfer to the server.
2) Type this:
scp -i yourAmazonKeypairPath.pem fileNameThatYouWantToTransfer.php ec2-user@ec2-00-000-000-15.us-west-2.compute.amazonaws.com:
N.B. Add "ec2-user@" before the ec2-... hostname you got from the EC2 website!! This is such a picky error!
3) Your file will be uploaded and the progress will be shown. When it reaches 100%, you are done!

Informatica permission and account to write target file to another server folder

I need the target file to be written to a specified folder on another server, rather than on the same box as Informatica.
I looked in the session properties and don't see an entry for specifying an account name and password.
The account name has permission on the folder on the other server.
Any help is appreciated.
Thanks
Besides using FTP, it might be possible (depending on your environment) to mount the required server path and use it directly.
You have to create an FTP connection if you want to write the file directly to another server.
Alternatively, you can write a script which FTPs the file after it is generated on the Informatica server.
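For the scripted route, a post-session command could push the generated file over SFTP; a minimal sketch (host, user, key, and paths are all hypothetical):
sftp -i /keys/infa_id_rsa etluser@otherserver <<'EOF'
put /infa/TgtFiles/target_file.dat /data/incoming/target_file.dat
EOF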

Key length error logging into store on GREG 5.0 using SSO and custom Cert

We have been implementing GREG 5.0, and with the default configuration everything works fine. Once we replace the default localhost certificate in the wso2carbon.jks keystore with our own, we receive "java.security.SignatureException: Signature length not correct: got 256 but was expecting 128" when we log into Store or Publisher using SSO.
We have removed the default keypair from wso2carbon.jks and added our own certificate. The password for our keystore and certificate are the same. We have updated all the configuration files per the WSO2 Carbon 4.4 documentation, and we have replaced local_policy.jar and us_export_policy.jar in our Java home in order to allow for the longer key length.
The administrator console works great with no issues. If we change the login method of Store or Publisher to "basic", it works fine. When the login method is set to "SSO", we end up sitting on a blank page at https://servername/store/acs. The result in the browser is the same whether we run as a Windows service or in console mode, but when running as a Windows service there is no error and no indication of what happened; in console mode, the error mentioned above is printed to the console.
I also noticed this behavior on Identity Server 5.0 when accessing the dashboard.
We are running on Windows.
Is there another location in WSO2 that I need to update to accommodate an increased key length?
Joe
The location I missed updating was the IdentityAlias in repository/deployment/server/jaggeryapps/store/config/store.json and repository/deployment/server/jaggeryapps/publisher/config/publisher.json. Once I updated that value to match the alias of the keypair I was using in wso2carbon.jks, it appeared to solve the key-length error, but it created another problem.
Now it was giving me a NullPointerException. I had provided the alias of our keypair, but that was not the same as the alias of the certificate (exported from that keypair) that we had loaded into client-truststore.jks. So I set both aliases to match, and with that change I was finally able to access the Store and Publisher.
After some further testing, it did not matter what my keypair alias was, as long as the value in IdentityAlias matched the alias of the certificate loaded in client-truststore.jks.
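To double-check that the aliases line up, you can list both stores with keytool (paths are relative to CARBON_HOME; wso2carbon is the stock keystore password, so adjust if yours differs):
keytool -list -keystore repository/resources/security/wso2carbon.jks -storepass wso2carbon
keytool -list -keystore repository/resources/security/client-truststore.jks -storepass wso2carbon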
Hope this helps someone.
Joe