AWS - CentOS 7 - /home/.aws/credentials not working

I have a CentOS 7 VPS with the AWS CLI installed in the /home directory. I've added my credentials via aws configure, and it has generated the following files:
/home/.aws/credentials
/home/.aws/config
If I run the following code, it fails:
$client = new Aws\Lightsail\LightsailClient([
    'region' => 'eu-west-2',
    'version' => '2016-11-28'
]);
The error message is:
AccessDeniedException (client): User: arn:aws:sts::523423432423:assumed-role/AmazonLightsailInstanceRole/i-0eb5b2155b08e5185 is not authorized to perform
However, if I add my credentials like so, it works:
$credentials = new Aws\Credentials\Credentials('key', 'secret');
$client = new Aws\Lightsail\LightsailClient([
    'region' => 'eu-west-2',
    'version' => '2016-11-28',
    'credentials' => $credentials
]);
Do I need to do something extra in order to get my script to read the /home/.aws/credentials file?

Do I need to do something extra in order to get my script to read the /home/.aws/credentials file?
Yes, you need to put the .aws directory (with its credentials file) in the home directory of the user running the script. That will be something like /home/username, meaning the full path to the credentials file is /home/username/.aws/credentials. It does not matter where the aws command itself is installed. When the SDK cannot find the file, it falls back to the instance-profile credentials, which is why your error mentions the assumed role AmazonLightsailInstanceRole.
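If moving the directory is not an option, the SDK can also be pointed at the file explicitly. A minimal sketch, assuming the AWS SDK for PHP v3 and its CredentialProvider::ini helper, using the default profile and the path from the question:

$provider = Aws\Credentials\CredentialProvider::ini('default', '/home/.aws/credentials');

$client = new Aws\Lightsail\LightsailClient([
    'region' => 'eu-west-2',
    'version' => '2016-11-28',
    // With an explicit provider, the SDK no longer falls back to the
    // instance role from the error message.
    'credentials' => $provider,
]);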

Related

AWS S3 presigned URL, check if file exists

I use the AWS PHP SDK.
How can I check if a file exists using presigned request commands?
Currently I use the GetObject command, but I do not need it to download the file. I only need to check if the file exists.
$cmd = $s3->getCommand('GetObject', [
    'Bucket' => 's3.test.bucket',
    'Key' => $fileKey
]);
$request = $s3->createPresignedRequest($cmd, '+60 minutes')->withMethod('GET');
return (string)$request->getUri();
Is there any command to achieve this?
Thank you.
I found the solution. The proper command is HeadObject and the method is HEAD.
It returns 200 or 404.
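For reference, a minimal sketch of that approach, reusing the $s3 client and $fileKey from the question:

// Same pattern as the GetObject version, but HeadObject only fetches the
// object's metadata, so nothing is downloaded.
$cmd = $s3->getCommand('HeadObject', [
    'Bucket' => 's3.test.bucket',
    'Key' => $fileKey
]);
$request = $s3->createPresignedRequest($cmd, '+60 minutes')->withMethod('HEAD');
// Issuing a HEAD request against this URL returns 200 if the object
// exists and 404 if it does not.
return (string)$request->getUri();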

How do I list AWS S3 files for a bucket in us-east-2?

I'm fully aware that S3 is region agnostic and that it shouldn't matter that the rest of our system is in us-east-2, but...
If I try to initialize without a signature or region, it tells me that I'm now required to use the v4 signature:
php > $s3Client = \Aws\S3\S3Client::factory(array('key' => 'ACCESS', 'secret' => 'SECRET', 'version' => '2006-03-01'));
php > $objects = $s3Client->getListObjectsIterator(array('Bucket'=>'my-bucket')); foreach ($objects as $object) { echo $object['Key'] . "\n"; };
Warning: Uncaught Aws\S3\Exception\InvalidRequestException: AWS Error Code: InvalidRequest, Status Code: 400, AWS Request ID: REQUEST, AWS Error Type: client, AWS Error Message: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256., User-Agent: aws-sdk-php2/2.7.0 Guzzle/3.9.2 curl/7.54.0 PHP/7.1.23 ITR
thrown in phar:///Library/WebServer/lib/aws.phar/Aws/Common/Exception/NamespaceExceptionFactory.php on line 91
I try to initialize S3 without a region:
php > $s3Client = \Aws\S3\S3Client::factory(array('key' => 'ACCESS', 'secret' => 'SECRET', 'signature' => 'v4'));
Warning: Uncaught Aws\Common\Exception\InvalidArgumentException: A region must be specified when using signature version 4 in phar:///Library/WebServer/aws.phar/Aws/S3/S3Client.php:283
Stack trace:
#0 phar:///Library/WebServer/aws.phar/Aws/S3/S3Client.php(171): Aws\S3\S3Client::createSignature(Array)
#1 php shell code(1): Aws\S3\S3Client::factory(Array)
#2 {main}
thrown in phar:///Library/WebServer/aws.phar/Aws/S3/S3Client.php on line 283
Alright, that makes sense. I guess I have to supply a region, even though S3 doesn't require one...
php > $s3Client = \Aws\S3\S3Client::factory(array('key' => 'ACCESS', 'secret' => 'SECRET', 'signature' => 'v4', 'region' => 'us-east-1'));
php > $objects = $s3Client->getListObjectsIterator(array('Bucket'=>'my-bucket', 'Region' => 'us-east-1')); foreach ($objects as $object) { echo $object['Key'] . "\n"; };
Warning: Uncaught Aws\S3\Exception\S3Exception: AWS Error Code: AuthorizationHeaderMalformed, Status Code: 400, AWS Request ID: REQUEST, AWS Error Type: client, AWS Error Message: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-east-2', User-Agent: aws-sdk-php2/2.6.15 Guzzle/3.9.2 curl/7.54.0 PHP/7.1.23 ITR
thrown in phar:///Library/WebServer/aws.phar/Aws/Common/Exception/NamespaceExceptionFactory.php on line 91
Oh... oh... ok. I guess I need to use us-east-2 since the rest of our servers and services are built on us-east-2...
php > $s3Client = \Aws\S3\S3Client::factory(array('key' => 'ACCESS', 'secret' => 'SECRET', 'signature' => 'v4', 'region' => 'us-east-2'));
Warning: Uncaught Aws\Common\Exception\InvalidArgumentException: us-east-2 is not a valid region for Amazon Simple Storage Service in phar:///Library/WebServer/aws.phar/Aws/Common/Client/AbstractClient.php:131
Stack trace:
#0 phar:///Library/WebServer/aws.phar/Aws/Common/Client/ClientBuilder.php(394): Aws\Common\Client\AbstractClient::getEndpoint(Object(Guzzle\Service\Description\ServiceDescription), 'us-east-2', 'https')
#1 phar:///Library/WebServer/aws.phar/Aws/Common/Client/ClientBuilder.php(204): Aws\Common\Client\ClientBuilder->updateConfigFromDescription(Object(Guzzle\Common\Collection))
#2 phar:///Library/WebServer/aws.phar/Aws/S3/S3Client.php(207): Aws\Common\Client\ClientBuilder->build()
#3 php shell code(1): Aws\S3\S3Client::factory(Array)
#4 {main}
thrown in phar:///Library/WebServer/aws.phar/Aws/Common/Client/AbstractClient.php on line 131
Then why would you tell me to use us-east-2, AWS?!
My aws.phar is version 2.6.15.
This was a known issue with the AWS PHP SDK in the 2.x versions.
You will need to upgrade the SDK to 2.8.x or above.
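After upgrading, the factory call from the question should accept the region. A sketch, assuming 2.8.x and the same placeholder credentials and bucket as above:

$s3Client = \Aws\S3\S3Client::factory(array(
    'key' => 'ACCESS',
    'secret' => 'SECRET',
    'signature' => 'v4',
    'region' => 'us-east-2', // rejected by 2.6.15, accepted from 2.8.x on
));
$objects = $s3Client->getListObjectsIterator(array('Bucket' => 'my-bucket'));
foreach ($objects as $object) {
    echo $object['Key'] . "\n";
}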

Rails Sitemap Generator Uploading to S3

I'm trying to generate a sitemap and upload it to my existing bucket on Amazon S3; however, I'm getting
Excon::Errors::Forbidden: Expected(200) <=> Actual(403 Forbidden)
This is my sitemap.rb file
SitemapGenerator::Sitemap.default_host = "http://www.example.com"
SitemapGenerator::Sitemap.public_path = 'tmp/sitemaps/'
SitemapGenerator::Sitemap.sitemaps_host = "http://s3.amazonaws.com/#{ENV['S3_BUCKET_NAME']}/"
SitemapGenerator::Sitemap.create do
  add about_path
  add landing_index_path
  add new_user_session_path, priority: 0.0
  Trip.find_each do |trip|
    add trip_path(trip.slug), lastmod: trip.updated_at
  end
end
I have this in my s3.rb file
CarrierWave.configure do |config|
  config.storage = :fog
  config.fog_credentials = {
    :provider => 'AWS',
    :aws_access_key_id => Rails::AWS.config['access_key_id'],
    :aws_secret_access_key => Rails::AWS.config['secret_access_key'],
    :region => 'us-east-1'
  }
  config.fog_directory = Rails::AWS.config['bucket_name']
end
Does anyone know what the issue is with this?
My working config (which I use on Heroku) is a little different from yours; here is what I have:
SitemapGenerator::Sitemap.default_host = 'http://example.com'
SitemapGenerator::Sitemap.public_path = 'tmp/'
SitemapGenerator::Sitemap.adapter = SitemapGenerator::S3Adapter.new(fog_provider: 'AWS', fog_directory: 'sitemap-bucket')
SitemapGenerator::Sitemap.sitemaps_host = "http://#{ENV['FOG_DIRECTORY']}.s3.amazonaws.com/"
SitemapGenerator::Sitemap.sitemaps_path = 'sitemaps/'
I don't use an s3.rb file; instead, I set the following environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
FOG_DIRECTORY
FOG_REGION
I used the tutorial here: https://github.com/kjvarga/sitemap_generator/wiki/Generate-Sitemaps-on-read-only-filesystems-like-Heroku
I hope it helps!
I was experiencing a similar error:
In '/app/tmp/':
rake aborted!
ArgumentError: is not a recognized provider
Going off of renatolond's answer above, this is the configuration that worked for me. The key is to make sure that all of your variables, such as fog_region, actually match up to valid values. Do not blindly copy and paste configuration credentials.
SitemapGenerator::Sitemap.default_host = "https://yourwebsitename.com"
SitemapGenerator::Sitemap.public_path = 'tmp/'
SitemapGenerator::Sitemap.adapter = SitemapGenerator::S3Adapter.new(
  fog_provider: 'AWS',
  aws_access_key_id: ENV['AWS_ACCESS_KEY_ID'],
  aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
  fog_directory: ENV['S3_BUCKET'],
  fog_region: ENV['AWS_REGION'])
SitemapGenerator::Sitemap.sitemaps_host = "http://#{ENV['S3_BUCKET']}.s3.amazonaws.com/"
SitemapGenerator::Sitemap.sitemaps_path = 'sitemaps/'

Configuring Blackfire on a base virtual box using Chef

I'm trying to give Blackfire.io (by SensioLabs) a try to profile an existing PHP application running on a Vagrant machine (with PHP 5.3) on a Mac.
I'm using Chef to provision my machine with Blackfire, but when running "vagrant provision" I get the following error:
default: STDERR: The server ID parameter is not set. Please run
blackfire-agent -register to configure it.
...which I already did.
This is my Vagrant file:
is_windows = (RbConfig::CONFIG['host_os'] =~ /mswin|mingw|cygwin/)
Vagrant.configure("2") do |config|
  ..
  config.vm.box = "covex/ubuntu1204-x64"
  config.omnibus.chef_version = :latest
  config.vm.provision "chef_solo" do |chef|
    chef.json = {
      :blackfire => {
        :'server-id' => "d4860b49-be67-404b-9fa1-b..",
        :'server-token' => "c412751f30d6c724033d8408e.."
      }
    }
    chef.add_recipe "blackfire"
  end
end
I followed the installation steps on https://blackfire.io/getting-started, except for the Probe paragraph.
Is my Vagrantfile wrongly configured, so that it can't read the server ID and token? Is "brew install blackfire-php53" needed for this? If so, is there a way to configure it through my Vagrantfile?
Guessing you are using https://supermarket.chef.io/cookbooks/blackfire.
You missed the agent node in the config tree; server-id and server-token need to be nested one level deeper:
{
  "blackfire" => {
    "agent" => {
      "server-id" => "your server-id",
      "server-token" => "your server-token",
    }
  }
}

Doctrine ORM Module PDO doesn't get correct database parameters when accessing website or console after AWS Beanstalk deploy

I am working through deploying a ZF2 app with Doctrine on AWS Beanstalk. Composer runs and sets everything up, and afterwards I run a Symfony console command to generate the databases and schemas using the parameters specified in my config. I followed the examples here to get the RDS params for the configuration:
http://www.michaelgallego.fr/blog/2013/05/24/how-to-deploy-safe-zf-2-applications-on-amazon-elastic-beanstalk/
All of this works up to accessing the website, at which point it gives me a PDO error accessing the database:
Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[HY000] [2002] No such file or directory' in /var/app/current/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOConnection.php:43
Stack trace:
#0 /var/app/current/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOConnection.php(43): PDO->__construct('mysql:host=loca...', 'username', 'password', Array)
#1 /var/app/current/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOMySql/Driver.php(45): Doctrine\DBAL\Driver\PDOConnection->__construct('mysql:host=loca...', 'username', 'password', Array)
#2 /var/app/current/vendor/doctrine/dbal/lib/Doctrine/DBAL/Connection.php(360): Doctrine\DBAL\Driver\PDOMySql\Driver->connect(Array, 'username', 'password', Array)
#3 /var/app/current/vendor/doctrine/dbal/lib/Doctrine/DBAL/Connection.php(429): Doctrine\DBAL\Connection->connect()
#4 /var/app/current/vendor/doctrine/dbal/lib/Doctrine/DBAL/Connection.php(389): Doctrine\DBAL\Connection->getDatabasePlatformVersion()
#5 /var/app/current/vendor/doctrine/dbal/lib/Doctrine/DBAL/C in
The ZF2 app is Apigility. After deployment, if I log into the server and run the PHP command
php public/index.php development enable
it produces the same error as the frontend website:
PHP Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[HY000] [2002]
No such file or directory' in
/var/app/current/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOConnection.php:43
I am setting up the database schemas through console commands that are in the AWS config. This does set up the correct database and runs the schema and data import.
.ebextensions/composer.config
container_commands:
  01installDev:
    command: "/usr/bin/composer.phar install --dev"
  02makeDatabase:
    command: "/usr/bin/php /var/app/ondeck/cwg.php database:create"
  03createDatabase:
    command: "/usr/bin/php /var/app/ondeck/cwg.php database:fresh"
config/autoload/database.global.php
return array(
    'doctrine' => array(
        'connection' => array(
            'orm_default' => array(
                'driverClass' => 'Doctrine\\DBAL\\Driver\\PDOMySql\\Driver',
                'params' => array(
                    'host' => $_SERVER['RDS_HOSTNAME'],
                    'port' => $_SERVER['RDS_PORT'],
                    'user' => $_SERVER['RDS_USERNAME'],
                    'password' => $_SERVER['RDS_PASSWORD'],
                    'dbname' => 'api_default',
                ),
            ),
        ),
    ),
);
When I run the console commands, I am just creating a Doctrine instance with the parameters in these files; that is the only difference from the ZF2 app, and there I do get the correct params.
To debug this, I dumped the $options and $pdo variables inside the Doctrine ORM module factory:
vendor/doctrine/doctrine-orm-module/src/DoctrineORMModule/Service/DBALConnectionFactory.php
var_dump($options);die();
object(DoctrineORMModule\Options\DBALConnection)#369 (8) {
  ["configuration":protected]=> string(11) "orm_default"
  ["eventmanager":protected]=> string(11) "orm_default"
  ["pdo":protected]=> NULL
  ["driverClass":protected]=> string(36) "Doctrine\DBAL\Driver\PDOMySql\Driver"
  ["wrapperClass":protected]=> NULL
  ["params":protected]=> array(5) {
    ["host"]=> string(9) "localhost"
    ["port"]=> string(4) "3306"
    ["user"]=> string(8) "username"
    ["password"]=> string(8) "password"
    ["dbname"]=> string(8) "database"
  }
  ["doctrineTypeMappings":protected]=> array(0) {}
  ["__strictMode__":protected]=> bool(true)
}
var_dump($pdo);die();
NULL
The parameters being used are the default Doctrine parameters. I don't know if this is too early in the process and ZF2 hasn't loaded the config files yet, but I haven't found any other place related to the connection that I can var_dump before the error. What am I doing wrong? Is it my AWS configuration setup, the firewall rules between the RDS servers and the API (which I assumed Beanstalk configured for you, since it set everything up), or my ZF2 configuration? Any help is appreciated.
Edit:
I moved the var_dump further down, to line 60:
var_dump($connection->getParams());die();
array(8) { ["driverClass"]=> string(36) "Doctrine\DBAL\Driver\PDOMySql\Driver
["wrapperClass"]=> NULL ["pdo"]=> NULL ["host"]=> string(9) "localhost" ["port"]=>
string(4) "3306" ["user"]=> string(8) "username" ["password"]=> string(8) "password
["dbname"]=> string(8) "database" }
I think this pretty well shows that Doctrine does not see my configuration at all, and that somehow I am doing something wrong or missing something.
Somehow, a static location for the configuration had been set inside the application.config.php file:
'config_glob_paths' => array(
    '/home/myhome/api/config/autoload/{,*.}{global,local}.php'
),
Once I changed that to:
'config_glob_paths' => array(
    'config/autoload/{,*.}{global,local}.php'
),
Doctrine started getting the correct params.
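For context: the static glob only matched on the machine where /home/myhome/api existed, while Beanstalk deploys the app under /var/app/current, so no config files were found and Doctrine silently fell back to its default connection params (the localhost/username/password values in the dumps above). A sketch of a deploy-location-independent variant, assuming the standard ZF2 skeleton layout where application.config.php sits in config/ and config_glob_paths lives under module_listener_options:

// config/application.config.php (only the relevant key shown)
return array(
    'module_listener_options' => array(
        'config_glob_paths' => array(
            // Anchored to this file's directory instead of a fixed path.
            __DIR__ . '/autoload/{,*.}{global,local}.php',
        ),
    ),
);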