Signature expired: is now earlier than error: InvalidSignatureException - amazon-web-services

I am trying a small example with AWS API Gateway and IAM authorization. API Gateway generated the endpoint below:
https://xyz1234.execute-api.us-east-2.amazonaws.com/Users/users
with a POST action and no parameters.
Initially I had IAM authorization turned off for this POST method, and I verified using Postman that it works.
Then I created a new IAM user and attached the AmazonAPIGatewayInvokeFullAccess policy to the user, thereby giving it permission to invoke any API, and enabled IAM authorization for the POST method.
I then went to Postman, added Authorization with the Access Key, Secret Key, AWS Region as us-east-2, and Service Name as execute-api, and tried to execute the request, but I got an InvalidSignatureException error with a 403 return code.
The body contains the following message:
Signature expired: 20170517T062414Z is now earlier than 20170517T062840Z (20170517T063340Z - 5 min.)
What am I missing?

A request signed with AWS sigV4 includes a timestamp for when the signature was created. Signatures are only valid for a short amount of time after they are created. (This limits the amount of time that a replay attack can be attempted.)
When the signature is validated the timestamp is compared to the current time. If this indicates that the signature was not created recently, then signature validation fails with the error message you mentioned.
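To make the timestamp's role concrete, here is a minimal sketch in Python (standard library only, not the exact server-side implementation) of the SigV4 signing-key derivation. The request date is baked into both the signing key and the x-amz-date header, which is why a stale clock invalidates the whole signature. All credential and parameter values shown are placeholders.

import hashlib
import hmac
from datetime import datetime, timezone

def _hmac(key, msg):
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

# SigV4 signing-key derivation: note the date stamp is the first input.
def signing_key(secret_key, date_stamp, region, service):
    k_date = _hmac(("AWS4" + secret_key).encode(), date_stamp)  # e.g. "20170517"
    k_region = _hmac(k_date, region)                            # e.g. "us-east-2"
    k_service = _hmac(k_region, service)                        # e.g. "execute-api"
    return _hmac(k_service, "aws4_request")

now = datetime.now(timezone.utc)
amz_date = now.strftime("%Y%m%dT%H%M%SZ")  # also sent as the x-amz-date header
key = signing_key("EXAMPLE_SECRET", now.strftime("%Y%m%d"), "us-east-2", "execute-api")
# The final signature is HMAC-SHA256(key, string_to_sign), where string_to_sign
# embeds amz_date, so the server can check how fresh the signature is.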
If you get this in a Docker container on Windows that uses WSL, it may help to fix the WSL clock by running wsl -d docker-desktop -e /sbin/hwclock -s in PowerShell. You can verify this is the cause beforehand by logging into the container, typing date in the terminal, and comparing the result with your host machine's time.
A common cause of this is when the local clock on the host generating the signature is off by more than a couple of minutes.
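A quick way to check for such skew is to compare your local clock against the Date header returned by any AWS endpoint. A minimal sketch in Python (standard library only; sts.amazonaws.com is just one convenient endpoint, and even an error response carries a Date header):

from datetime import datetime, timezone
from email.utils import parsedate_to_datetime
import urllib.error
import urllib.request

# HEAD an AWS endpoint and read the Date header from the response.
req = urllib.request.Request("https://sts.amazonaws.com", method="HEAD")
try:
    resp = urllib.request.urlopen(req)
except urllib.error.HTTPError as err:
    resp = err  # error responses still carry a Date header

server_time = parsedate_to_datetime(resp.headers["Date"])
local_time = datetime.now(timezone.utc)
skew = abs((local_time - server_time).total_seconds())
# SigV4 tolerates roughly 5 minutes of skew.
print(f"skew is {skew:.0f} seconds", "(OK)" if skew < 300 else "(too large)")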

You need to synchronize your machine's local clock with NTP.
For example, on an Ubuntu machine:
sudo ntpdate pool.ntp.org
System time goes out of sync quite often. You need to keep it in sync periodically.
You can run a daily CRON job to keep your system time in sync as mentioned at this link: Periodically synchronize time in Linux
Create a bash script to sync time, called ntpdate, and put the below into it:
#!/bin/sh
# sync server time
/usr/sbin/ntpdate pool.ntp.org >> /tmp/ntpdate.log
You can place this script anywhere you like and then set up a cron job for it. I will be putting it into the daily cron directory so that it runs once every day. So my ntpdate script is now in /etc/cron.daily/ntpdate and it will run every day.
Make this script executable
chmod +x /etc/cron.daily/ntpdate
Test it by running the script once and look for some output in /tmp/ntpdate.log:
/etc/cron.daily/ntpdate
In your log file you should see something like
26 Aug 12:19:06 ntpdate[2191]: adjust time server 206.108.0.131 offset 0.272120 sec

I faced a similar issue when I used the timedatectl command to change the datetime of the underlying machine. The explanations given by MikeD and others are really informative for fixing the issue:
sudo apt install ntp
sudo apt install ntpdate
sudo ntpdate ntp.ubuntu.com
After synchronizing the time with the correct current datetime, the issue was resolved.

For me, the issue happened while using WSL. The date in WSL was out of sync.
The solution was to run the command wsl --shutdown and restart Docker.

This one command did the trick
sudo ntpdate pool.ntp.org

Make sure your PC's clock is set correctly. I faced the same issue and then realized that for some reason my clock wasn't showing the right time. As soon as I corrected the time, it started working fine again. Hope this helps.

I was also facing this issue. I added
correctClockSkew: true
and the issue was fixed for me:
const nodemailer = require('nodemailer');
const ses = require('nodemailer-ses-transport');

let transporter = nodemailer.createTransport(ses({
    correctClockSkew: true,  // let the SDK compensate for local clock drift
    accessKeyId: '**',       // redacted
    secretAccessKey: '**',   // redacted
    region: '**'             // redacted
}));

If you are on an AWS EC2 Ubuntu server and somehow not able to fix the time with NTP, you can set the date from an HTTP response header:
sudo date -s "$(wget -qSO- --max-redirect=0 google.com 2>&1 | grep Date: | cut -d' ' -f5-8)Z"
Source: https://askubuntu.com/a/655528

Had this problem on Windows. The current time got out of sync after a power outage. Solved it via Settings -> Date and time -> Sync now.

For those who face this issue while running Lambda functions (that use other AWS services like DynamoDB) locally with sam local invoke:
The time in the Docker container used by SAM may not be in sync with the host. Restarting Docker on the host (Docker Desktop on Windows) should resolve the issue.

I was making AWS API requests from a VM on my local machine. I checked the date was correct and was syncing, but I was still getting the error above. I halted and re-upped my VM and the error went away. I never figured out the exact cause, but "turning it off and back on again" fixed it.

Complementing miked-at-aws's post about AWS sigV4, there are at least two main possible root causes for the clock skew:
Your CPU is overloaded (reaching 99% usage, or an EC2 instance with CPU limits that has run out of CPU credits).
Why would this generate time skew? Because of the gap between when the Amazon SDK creates the timestamp and when the request is actually sent. Normally there shouldn't be more than a few nano- or microseconds between the two, but if your CPU is overwhelmed it may take several seconds or even minutes, so for this root cause you will see not 100% of requests failing but just some x% that may not be too big.
For the second root cause, which is that your machine's clock simply isn't adjusted, probably 100% of your requests are failing, and you just have to make sure that your machine's clock is set and adjusted correctly.

I have tried all the solutions related to time sync, but nothing worked. What I did was set the correctClockSkew option to true while creating the service client. This solved my problem.
For instance:
let dynamodb = new AWS.DynamoDB({correctClockSkew: true});
Hope this will sort it out.
Reference: https://github.com/aws/aws-sdk-js/issues/527

I faced this same problem while fetching video from Amazon Kinesis to my local website. In order to solve it, I installed chrony on my computer, and that solved my problem. You can see Amazon's chrony setup instructions at the following link:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html

What worked for me was to change the time on my computer. I am in the UK, so I put it forward one hour to a European time zone. Then it worked. This is not the best fix, but it let me move forward.
My region is set to eu-west-2, which is London, so I am not sure why it only worked when I put the time on my computer forward an hour. I need to look into that.

Just update the system date and time, as they might be outdated; synchronize your clock and reload your console. This worked for me.

This is a question asking for any recent updates/suggestions.
Can this problem be solved using AWS Amplify, the AWS Cognito SDK, or a service worker?
Time synchronization is not working.

Related

AWS CLI in WSL2: "RequestTimeTooSkewed"

I executed the command aws s3 ls and got the following error message:
An error occurred (RequestTimeTooSkewed) when calling the ListBuckets operation: The difference between the request time and the current time is too large.
Please advise.
If you're using WSL, you can run wsl --shutdown in CMD or PowerShell. This ensures the next time you start a WSL session, it cold boots and fixes the time.
https://github.com/microsoft/WSL/issues/4245
AWS API requests are 'signed' and part of the information exchanged is a timestamp. If the timestamp is more than 900 seconds old the request will be rejected.
This is done to prevent "replay attacks" where old requests are sent again.
You can fix this by correcting the Date and Time on the system where you are sending the request.

npm:youtube-dl and Lamda HTTP Error 429: Too Many Requests

I am running an npm package, youtube-dl, through a Lambda function, as I want to create an online converter.
I have suddenly started to run into the following error message:
{
"errorMessage": "Command failed: /var/task/node_modules/youtube-dl/bin/youtube-dl --dump-json --format=best[ext=mp4] https://www.youtube.com/watch?v=MfTbHITdhEI\nERROR: Unable to download webpage: HTTP Error 429: Too Many Requests (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\n",
"errorType": "Error",
"stackTrace": ["ERROR: Unable to download webpage: HTTP Error 429: Too Many Requests (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.", "", "ChildProcess.exithandler (child_process.js:275:12)", "emitTwo (events.js:126:13)", "ChildProcess.emit (events.js:214:7)", "maybeClose (internal/child_process.js:925:16)", "Process.ChildProcess._handle.onexit (internal/child_process.js:209:5)"]
}
Edit: I have run this a few times when I was testing the other day, but today I only ran it once.
I think that the IP address used by my Lambda function has now been blacklisted. I'm unsure how to proceed as I am a junior and very new to all this.
Is there a way to resolve this? Can I get a new IP address? Is this going to be super costly?
youtube-dl lacks a delay (requests-per-time limit) option
(see the suggestion at the bottom of my post).
NEVER download more than one video at a time with youtube-dl.
You can look up the youtube-dl authors' contact details (e-mail etc.) and write to them directly, and also open an issue on the GitHub page about it; the more requests they get, the sooner they may be pleased to fix it.
Currently they have plenty of requests about this issue on GitHub, but they tend to block discussions and close tickets on this problem.
This is some sort of misbehaviour, I believe.
I also found that the developer suggests using a proxy instead of introducing a delay option in the code, which is extremely funny.
OK, on using a proxy: that does not actually solve the problem, since it is a flaw in the program design, and whether you use a proxy or not, YouTube's limits are still there.
Please note:
This causes not only the error in the subject but also YouTube blocking your IP.
Once you hit this situation, YouTube will block your IP as suspicious again and again, even with a small number of requests. This causes tremendous problems, since the IP is marked as suspicious.
Without a requests-per-time limit option (with a safe value by default), I consider youtube-dl dangerous software that is bound to cause problems, and I stopped using it until such an option is introduced.
RECOMMENDATIONS:
Use Ctrl+S (suspend) and Ctrl+Q (resume) while youtube-dl is collecting the digest for many videos (when you have already downloaded many videos of a channel but new ones are still there). I suspend it for a few minutes after every 10.
And use --limit-rate 150K (or as low as is sane); this may help you avoid hitting the limit, since the whole transmission is shaped.
Ok, so I found this response: https://stackoverflow.com/a/45339683/9793169
I am wondering if it's possible that, because our volume is low, we just always end up using the same container, and hence the same IP address?
Yes, that is exactly the reason. A container is only spawned if no containers are already available. After a few minutes of no further demand, excess/unneeded containers are destroyed.
If so is there any way to prevent this?
No, this behavior is by design.
SOLUTION:
I logged out for 20 minutes, went back to the function, and ran it again. It worked.
Not my solution; it took me a while to understand what he meant (reading is an art). It worked for me.
(see: https://askubuntu.com/questions/1220266/youtube-dl-do-not-working-http-error-429-too-many-requests-how-can-i-solve-this)
You have to use the --cookies option in combination with a current/correct cookies file.
Here the steps I followed
1. if you use Firefox, install addon cookies.txt, enable the addon
2. clear your browser cache, clear you browser cookies (privacy reasons)
3. go to google.com, and log in with your google account
4. go to youtube.com
5. click on the cookies.txt addon, and export the cookies, save it as cookies.txt (in the same directory from where you are going to run youtube-dl)
6. this worked for me ... youtube-dl --cookies cookies.txt https://www.youtube.com/watch?v=....
Hope it helps.
Use the --force-ipv4 option in the command:
youtube-dl --force-ipv4 ...
What you should do is handle that error by retrying the requests that are throttled.
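For example, a minimal retry-with-backoff sketch in Python around the youtube-dl call; the function name and backoff values are illustrative, not part of any official API:

import subprocess
import time

def dump_json(url, max_attempts=5):
    for attempt in range(max_attempts):
        result = subprocess.run(
            ["youtube-dl", "--dump-json", "--format=best[ext=mp4]", url],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return result.stdout
        if "429" not in result.stderr:
            raise RuntimeError(result.stderr)  # some other failure: don't retry
        time.sleep(2 ** attempt)  # back off: 1, 2, 4, 8... seconds
    raise RuntimeError("still throttled after retries")

Keep in mind that in Lambda you pay for sleep time, so long backoffs may be better handled by re-queueing the work instead.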

AWS EC2 t2.small Instance CPU utilization spikes to 100% at regular interval every day

My EC2 t2.small instance CPU Utilization goes to 100% every day at the same time, roughly between 21:25 and 21:30 server time.
I have checked syslog and apache log and found nothing unusual during that time. Also, I have checked my cron jobs and system cron jobs and found no daily cron jobs running at that time (/etc/cron.daily is scheduled at 6:25 and executes correctly at that time according to logs).
Any ideas what could cause this behavior?
OS: Ubuntu 16.04
After a lot of searching, logging, and trial and error, I found that apache2 was causing this.
For some reason, the process hangs at 100% on some occasions, specifically when an SSL test is performed on ssllabs.com. I found this line in the Apache error log:
[ssl:error] [pid 29110] [client 64.41.200.104:58242] AH02042: rejecting client initiated renegotiation
Again, after some trial and error, the solution that fixed the issue was to update the SSLCipherSuite Apache directive in /etc/apache2/mods-available/ssl.conf.
Here is the value I used:
SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
After this change, the problem never repeated.
I hope this will help someone else in the same situation.
My service ran into a similar issue (CPU spikes during a daily cron job). It turned out that logrotate was compressing a very large log file every morning using pbzip2. A code change was made to cut down on the spammy logs, resolving the issue.

Why is AWS EC2 CPU usage shooting up to 100% momentarily from IOWait?

I have a large web-based application running in AWS with numerous EC2 instances. Occasionally -- about twice or thrice per week -- I receive an alarm notification from my Sensu monitoring system notifying me that one of my instances has hit 100% CPU.
This is the notification:
CheckCPU TOTAL WARNING: total=100.0 user=0.0 nice=0.0 system=0.0 idle=25.0 iowait=100.0 irq=0.0 softirq=0.0 steal=0.0 guest=0.0
Host: my_host_name
Timestamp: 2016-09-28 13:38:57 +0000
Address: XX.XX.XX.XX
Check Name: check-cpu-usage
Command: /etc/sensu/plugins/check-cpu.rb -w 70 -c 90
Status: 1
Occurrences: 1
This seems to be a momentary occurrence and the CPU goes back down to normal levels within seconds. So it seems like something not to get too worried about. But I'm still curious why it is happening. Notice that the CPU is taken up with the 100% IOWaits.
FYI, Amazon's monitoring system doesn't notice this blip. See the images below showing the CPU & IO levels at 13:38.
Interestingly, AWS tells me that this instance will be retired soon. Might the two be related?
AWS is only displaying a 5 minute period, and it looks like your CPU check is set to send alarms after a single occurrence. If your CPU check's interval is less than 5 minutes, the AWS console may be rolling up the average to mask the actual CPU spike.
I'd recommend narrowing down the AWS monitoring console to a smaller period to see if you see the spike there.
I would add this as comment, but I have no reputation to do so.
I have noticed my EC2 instances have been doing this too, but for far longer, and after apt-get update + upgrade.
I thought it was an Apache thing, then started using Nginx in a new instance to test, and it did the same: I ran apt-get a few hours ago, then came back to find the instance using full CPU, for hours! Good thing it is just a test machine, but I wonder what is wrong with Ubuntu/apt-get that might have caused this. From now on I guess I will have to reboot the machine after apt-get, as it seems to be the only way to put it back to normal.

AWS Elastic Beanstalk Worker timing out after inactivity during long computation

I am trying to use Amazon Elastic Beanstalk to run a very long numerical simulation - up to 20 hours. The code works beautifully when I tell it to do a short, 20 second simulation. However, when running a longer one, I get the error "The following instances have not responded in the allowed command timeout time (they might still finish eventually on their own)".
After browsing the web, it seems to me that the issue is that Elastic Beanstalk allows worker processes to run for 30 minutes at most, and then they time out because the instance has not responded (i.e. finished the simulation). The solution some have proposed is to send a message every 30 seconds or so that "pings" Elastic Beanstalk, letting it know that the simulation is going well so it doesn't time out, which would let me run a long worker process. So I have a few questions:
Is this the correct approach?
If so, what code or configuration would I add to the project to make it stop terminating early?
If not, how can I smoothly run a 12+ hour simulation on AWS or more generally, the cloud?
Add on information
Thank you for the feedback, Rohit. To give some more information, I'm using Python with Flask.
• I am indeed using an Elastic Beanstalk worker tier with SQS queues
• In my code, I'm running a simulation of variable length - from as short as 20 seconds to as long as 20 hours. 99% of the work that Elastic Beanstalk does is running the simulation. The other 1% involves saving results, sending emails, etc.
The simulation itself involves generating many random numbers and working with objects that I defined. I use numpy heavily here.
Let me know if I can provide any more information. I really appreciate the help :)
After talking to a friend who's more in the know about this stuff than me, I solved the problem. It's a little sketchy, but got the job done. For future reference, here is an outline of what I did:
1) Wrote a main script that used Amazon's boto library to connect to my SQS queue, with an infinite while loop that polls the queue every 60 seconds. When there's a message on the queue, it runs a simulation and then continues through the loop
2) Borrowed a beautiful /etc/init.d/ template to run my script as a daemon (http://blog.scphillips.com/2013/07/getting-a-python-script-to-run-in-the-background-as-a-service-on-boot/)
3) Made my main script and the script in (2) executable
4) Set up a cron job to make sure the script would start back up if it failed.
Once again, thank you Rohit for taking the time to help me out. I'm glad I still got to use Amazon even though Elastic Beanstalk wasn't the right tool for the job
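For reference, a minimal sketch of the polling loop from step 1, written with boto3 (the current SDK) rather than the legacy boto library used above; the queue URL and run_simulation are placeholders:

import time
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-2.amazonaws.com/123456789012/simulations"  # placeholder

def run_simulation(body):
    ...  # the long-running numerical simulation goes here

while True:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1)
    messages = resp.get("Messages", [])
    if not messages:
        time.sleep(60)  # poll every 60 seconds, as in the outline above
        continue
    run_simulation(messages[0]["Body"])
    # Delete only after the simulation finishes; for runs longer than the
    # queue's visibility timeout, delete up front instead (see the answer
    # below on the 12-hour visibility limit).
    sqs.delete_message(QueueUrl=QUEUE_URL,
                       ReceiptHandle=messages[0]["ReceiptHandle"])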
From your question it seems you are running into launches timing out because some commands during launch that run on your instance take more than 30 minutes.
As explained here, you can adjust the Timeout option in the aws:elasticbeanstalk:command namespace. This can have values between 1 and 1800. This means if your commands finish within 30 minutes you won't see this error. The commands might eventually finish as the error message says but since Elastic Beanstalk has not received a response within the specified period it does not know what is going on your instance.
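For instance, a minimal .ebextensions sketch (the file name is illustrative):

# .ebextensions/timeout.config
option_settings:
  aws:elasticbeanstalk:command:
    Timeout: 1800  # seconds; the maximum allowed value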
It would be helpful if you could add more details about your usecase. What commands you are running during startup? Apparently you are using ebextensions to launch commands which take a long time. Is it possible to run those commands in the background or do you need these commands to run during server startup?
If you are running a Tomcat web app you could also use something like servlet init method to run app bootstrapping code. This code can take however long it needs without giving you this error message.
Unfortunately, there is no way to 'process a message' from an SQS queue for more than 12 hours (see the description of ChangeVisibilityTimeout).
With that being the case, this approach doesn't fit your application well. I have run into the same problem.
The correct way to do this: I don't know. However, I would suggest an alternate approach where you grab a message off of your queue, spin off a thread or process to run your long running simulation, and then delete the message (signaling successful processing). In this approach, be careful of spinning off too many threads on one machine and also be wary of machines shutting down before the simulation has ended, because the queue message has already been deleted.
Final note: your question is excellently worded and sufficiently detailed :)
For those looking to run jobs shorter than 10 hours, it should be mentioned that the current inactivity timeout limit is 36000 seconds, i.e. exactly 10 hours, and no longer the 30 minutes mentioned in posts all over the web (which led me to think a workaround like the one described above was needed).
Check out the docs: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-tiers.html
A very nice write-up can be found here: https://dev.to/rizasaputra/understanding-aws-elastic-beanstalk-worker-timeout-42hi