I have a simple DynamoDB table on AWS whose test data ('Items') I can view from the DynamoDB console and also via a standalone client called Dynobase.
Now I want to create a simple webpage, hosted on Lightsail, that contains an HTML table to display the data.
I would like to connect to the DynamoDB using PHP then issue a query, tabulating the response.
Can someone point me at an example of how to do this? The AWS documentation is quite confusing.
This is the code (Link to the code) I am running on my Lightsail instance. I added <?php at the top of the file and ?> at the bottom, and I am testing the code in my web browser at xx.xx.xx.xx/MoviesCreateTable.php.
This is the error I am getting:
Unable to create table: Error executing "CreateTable" on "http://localhost:8000"; AWS HTTP error: cURL error 7: Failed to connect to localhost port 8000: Connection refused (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for http://localhost:8000
Many Thanks
Andy
It turned out that the tutorial here ([Link to the code]) was very useful.
The main change I made to get the examples working on my Lightsail instance was to remove the endpoint line from the client configuration (that line points the SDK at a local DynamoDB instance on http://localhost:8000, which is what caused the connection-refused error).
Then I created a new IAM user and attached a policy to that user granting Lightsail and DynamoDB access.
Next, using the AWS CLI on the Lightsail box, I configured it with that new user's credentials.
This worked for me.
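The steps above can be sketched at the command line. This is a hedged sketch, not the exact commands from the answer: the access key values, region, and the table name "Movies" (from the tutorial's examples) are placeholders/assumptions.

```shell
# Configure the AWS CLI on the Lightsail box with the new IAM user's
# credentials (replace the placeholders with your own values).
aws configure set aws_access_key_id YOUR_ACCESS_KEY_ID
aws configure set aws_secret_access_key YOUR_SECRET_ACCESS_KEY
aws configure set region us-east-1

# Verify the credentials can reach the real DynamoDB service.
# Note: no --endpoint-url flag, so this targets AWS itself rather
# than a local DynamoDB instance on localhost:8000.
aws dynamodb list-tables
aws dynamodb scan --table-name Movies --max-items 5
```

If `list-tables` succeeds here, the PHP SDK should also work once the endpoint line is removed, since both pick up the same configured credentials.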
We have an AWS Org with AWS Grafana running in the root account, set up with Organization access.
We have successfully connected to AWS Prometheus and other data sources across different organization accounts, but we can't get AWS Grafana to connect to an Amazon OpenSearch cluster that is hosted in a VPC.
If you look at Grafana -> AWS Data Sources -> Amazon OpenSearch Service, the cluster is listed, but all attempts to connect have failed.
We have tried:
Using SigV4 auth
Using Basic auth + With Credentials (even adding VPC connections between accounts and checking that ports are open)
When we try Save & Test, we always get 'Testing...' followed by 'OpenSearch error: Bad Gateway' in Grafana.
Has anyone got it working successfully and able to assist?
Same issue here, except our Grafana is set up in the same account as the OpenSearch cluster.
We also tried configuring the security group on the OpenSearch cluster to accept everything (all ports, all protocols, from anywhere).
I'm wondering if it's a network issue: since the OpenSearch cluster is in a VPC, can Grafana reach it at all? I can't find any documentation on the networking side of managed Grafana.
Hope someone can help.
Been told it's a known issue.
The suggested workaround is to put a proxy in front of your OpenSearch cluster and give it internet access so Grafana can connect.
No idea on timelines for AWS to fix the problem :(
A solution that works well on my side is to fill in the fields as follows:
HTTP section:
URL: https://search-anything
Access: Server (default)
Auth section:
Check Basic auth
Then, in Basic Auth Details, fill in the master username and password
OpenSearch details section:
Fill in the name of an index
Make sure that a timestamp field exists in that index, and put the name of this field in Time field name
Choose the right OpenSearch version (e.g. 1.0.x)
Click Save & Test
I hope this helps.
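The same field values can be supplied through Grafana's HTTP API instead of the UI. A hypothetical sketch follows; the host, API key, master credentials, index name, time field, and version string are all placeholders, and the plugin ID assumes the OpenSearch data source plugin.

```shell
# Create an OpenSearch data source via Grafana's HTTP API,
# mirroring the fields described above (Basic auth + master user).
curl -X POST "https://YOUR-GRAFANA-HOST/api/datasources" \
  -H "Authorization: Bearer YOUR_GRAFANA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "opensearch",
    "type": "grafana-opensearch-datasource",
    "access": "proxy",
    "url": "https://search-anything",
    "basicAuth": true,
    "basicAuthUser": "MASTER_USERNAME",
    "secureJsonData": { "basicAuthPassword": "MASTER_PASSWORD" },
    "database": "YOUR_INDEX_NAME",
    "jsonData": { "timeField": "@timestamp", "version": "1.0.0" }
  }'
```

Scripting the data source this way also makes it easy to recreate the configuration when testing different auth settings.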
I'm having some trouble with the RDS / Managed AD connection:
I've set up the AWS Managed Microsoft AD and added some users.
Then, I've set up an MS-SQL Database in RDS.
Now, while accessing it via SQL Server Management Studio works flawlessly, I simply cannot add the AD users I've created.
I get the following error: The program cannot open the required dialog box because it cannot determine whether the computer named "Network Name Resource" is joined to a domain
Looking at the AD, I can see that the RDS instance is indeed missing.
How can that be? In the RDS console I can clearly see it being attached to the domain.
I have searched this issue for quite some time and hope someone can help me out here...
You must be signed into SSMS with a domain account that has privileges to add/modify users' logins for that search dialog to work.
Furthermore, it is non-obvious, but you can confirm that your RDS instance is in the domain by using ADAC or ADUC and looking under: AWS Reserved > RDS
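As an alternative to the SSMS dialog, once you are signed in as a domain account with the right privileges you can add an AD user as a login with plain T-SQL. A hedged sketch, where the RDS endpoint and the domain\user name are placeholders:

```shell
# Connect with Windows (domain) authentication (-E) and create a
# login for a domain user directly, bypassing the search dialog.
# Replace the endpoint and CORP\jdoe with your own values.
sqlcmd -S your-instance.xxxxxxxx.us-east-1.rds.amazonaws.com -E \
  -Q "CREATE LOGIN [CORP\jdoe] FROM WINDOWS;"
```

This sidesteps the dialog's domain-membership check entirely, since the statement is resolved by the SQL Server instance itself.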
So today I wanted to set up an integration server.
We are building a PHP Application using Laravel 5.5 and want to host it on AWS.
We have also registered to Laravel Forge and Laravel Envoyer.
So to start, I wanted to connect my Laravel Forge account to Amazon.
I signed into my Amazon account, activated everything, and created a new IAM user with the AdministratorAccess permission. I saved everything and created the AWS secret and key; it is shown with status Active in the console.
I then headed over to Laravel Forge and went to Server Providers and selected Amazon. In Profile Name I entered the name of the user, along with his key and secret. I thought I'd be done, but I am getting this error:
Whoops! There were some problems with your input.
Invalid API credentials.
Does anyone know how I can connect my Forge with AWS, or can anyone point me to what I did wrong? Am I missing something?
Having the same issue. I seem to be able to create servers in the US regions but nowhere else, with the same error as above. The JS console shows 500 server errors when selecting any other region. Hoping someone has found a solution to this.
I contacted Laravel Forge support and the only advice I got was to contact AWS directly. It's quite frustrating.
This issue was happening for me because the AWS account wasn't fully activated. Please follow up with AWS in this regard!
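Before pasting a key pair into Forge, one quick sanity check (a sketch, assuming the AWS CLI is installed and configured with the same access key and secret you gave Forge) is to ask AWS who the credentials belong to:

```shell
# If the account is fully activated and the key pair is valid, this
# returns the account ID and the IAM user's ARN; otherwise it errors.
aws sts get-caller-identity
```

If this call fails with the same credentials, the problem is on the AWS side (activation or the key itself) rather than in Forge.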
I'm evaluating AWS Data Migration Services. I'm attempting to move data from an Azure SQL database to a SQL Server 2016 database sitting on AWS RDS. I've successfully created the source and was able to connect when I clicked the Run Test button. However, when I entered the Target database connection details information, I'm not able to connect when I click the Run Test button. The information and error message is below.
I am able to connect to this instance using SQL Server Management Studio, with the same credentials I'm using in the screenshot.
For timeout concerns, security groups are usually the culprit. Can you verify if the security group of your Target RDS instance allows ingress from the security group that the DMS Replication Instance belongs to?
See this article for more information: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.Network.html
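The ingress rule described above can be added from the CLI as well. A hedged sketch, assuming SQL Server's default port 1433; both security group IDs are placeholders for your target RDS instance's group and the DMS replication instance's group:

```shell
# Allow the DMS replication instance's security group to reach the
# target RDS instance on the SQL Server port.
aws ec2 authorize-security-group-ingress \
  --group-id sg-TARGET-RDS-PLACEHOLDER \
  --protocol tcp \
  --port 1433 \
  --source-group sg-DMS-REPLICATION-PLACEHOLDER
```

Referencing the replication instance's security group as the source (rather than an IP range) keeps the rule valid even if the replication instance's private IP changes.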
This is not a duplicate of the question "Getting my AWS credentials using an API call" because I am asking specifically about what Amazon means in the example that they give.
I am looking here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
I see this bit:
Warning
If you use services that use instance metadata with IAM roles, ensure that you don't expose your credentials when the services make HTTP calls on your behalf. The types of services that could expose your credentials include HTTP proxies, HTML/CSS validator services, and XML processors that support XML inclusion.
The following command retrieves the security credentials for an IAM role named s3access.
$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access
Where does this IP address come from? What is 169.254.169.254? It can't be my server, since I don't have software running on port 80, nor would I grant Amazon an alias on my server.
But I did actually run the above, and it simply timed out. So the IP address 169.254.169.254 is not a service that Amazon is actively running. So what is it?
Does anyone understand this example that Amazon offers?
169.254.0.0/16 is the link-local address space: https://en.wikipedia.org/wiki/Link-local_address
It's commonly used for localhost/local-subnet use cases. Amazon puts its instance metadata service at 169.254.169.254 so that it can be queried from within EC2 instances.
curl http://169.254.169.254/latest/meta-data
should always return something when run from an EC2 instance; the full http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access will only return something if you have an IAM role named s3access attached to your instance. From anywhere outside EC2, the address is unreachable and the request will time out.
169.254.169.254 is the address of the AWS metadata service. You can query this address from an EC2 server to obtain information about the server. The metadata that can be obtained in this manner is documented here.
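Note that on instances configured to require IMDSv2 (the token-based version of the metadata service), the plain `curl` above is rejected and you must fetch a session token first. A sketch, to be run from inside an EC2 instance; the role name `s3access` is the one from Amazon's example:

```shell
# IMDSv2: request a short-lived session token, then pass it with
# every metadata request.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# List the roles attached to this instance, then fetch the
# temporary credentials for one of them.
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access
```

The last call returns a JSON document containing a temporary AccessKeyId, SecretAccessKey, and session Token that the SDKs rotate automatically.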
Are you saying that when you run that curl command from an EC2 server it is timing out?