MWAA - environments constantly loading - amazon-web-services

I'm currently trying to set up an Airflow environment via MWAA. I've gone through the create-environment steps twice, both times ending at the page listing Airflow environments with a banner saying the creation was successful. However, for the past two days, this environments page has just shown "Loading Environments", and the environment count shows (0).
So far, I've added two VPC interface endpoints, for ECR and for the MWAA API and environment, but no luck. Has anyone else run into this issue, or have any clue what might be happening? Thanks!

Were you able to find a solution to this issue? I had similar problems when I set up MWAA for the first time on an AWS account.
Here's a link to a set of checks for verifying that all the resources for MWAA are set up correctly: https://github.com/awslabs/aws-support-tools/tree/master/MWAA
If you run the script mentioned in the repo, you should be able to see where the issue lies.
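Running the checks might look something like the following. This is a sketch only: the script name, directory, and `--envname` flag are from memory of that repo and may have changed, and the environment name is a placeholder, so check the repo's README before running it.

```shell
# Clone the support-tools repo and run the MWAA environment verifier.
# Requires AWS credentials configured for the account hosting the environment.
git clone https://github.com/awslabs/aws-support-tools.git
cd aws-support-tools/MWAA/verify_env
pip3 install -r requirements.txt
# 'my-mwaa-environment' is a hypothetical name; use your environment's name.
python3 verify_env.py --envname my-mwaa-environment
```

The script walks through the VPC, security group, endpoint, and IAM checks and reports which prerequisite is missing.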

Related

setting up Strapi on AWS - the docs don't seem to show everything

I am trying to set up Strapi on AWS and have been following the Strapi documentation here. But when I get to starting the EC2 instance, I run into a few errors that were not addressed in the docs: app keys, JWT secrets, and other env config variables. Has anyone else tried this and run into similar issues, and how did you get around them? I have been running pm2 log in my EC2 instance to get the errors.
I would post my config and everything, but I have followed the Strapi docs to the letter and have run through every single step a dozen times now, and I don't want to basically copy and paste the docs in here again. My last step is to get Strapi to run on the EC2 instance, but I keep running into errors like this:
error: Missing jwtSecret. Please, set configuration variable "jwtSecret" for the users-permissions plugin in config/plugins.js (ex: you can generate one using Node with `crypto.randomBytes(16).toString('base64')`).
which don't seem to be addressed in the docs anywhere.
This is because you need to set the JWT_SECRET (and ADMIN_JWT_SECRET) environment variables. You can do this by adding a line like JWT_SECRET=A_RANDOM_STRING_HERE to the .env file located at the root of your Strapi project.
You can find more details here: https://docs.strapi.io/developer-docs/latest/plugins/users-permissions.html#jwt-configuration
You can generate a JWT secret with:
node -e "console.log(require('crypto').randomBytes(256).toString('base64'));"
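If Node isn't handy on the instance, the same secrets can be generated with openssl and appended straight to the .env file. This is a sketch run from the root of the Strapi project; the variable names are the ones mentioned in the error message and the answer above.

```shell
# Run from the root of the Strapi project: generate random secrets and
# append them to .env (JWT_SECRET for the users-permissions plugin,
# ADMIN_JWT_SECRET for the admin panel).
echo "JWT_SECRET=$(openssl rand -base64 16)" >> .env
echo "ADMIN_JWT_SECRET=$(openssl rand -base64 16)" >> .env
grep JWT .env   # confirm both lines were written
```

After updating .env, restart the app (e.g. via pm2) so the new variables are picked up.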

Application information missing in Spinnaker after re-adding GKE accounts - using spinnaker-for-gke

I am using a Spinnaker installation set up on GCP using the spinnaker-for-gcp tools. My initial setup worked fine. However, we recently had to reconfigure our GKE clusters (independently of Spinnaker). Consequently, I deleted and re-added our GKE accounts. After doing that, the Spinnaker UI appears to show the existing GKE-based applications, but if I click on any of them there are no clusters or load balancers listed anymore! Here are the spinnaker-for-gcp commands that I executed:
$ hal config provider kubernetes account delete company-prod-acct
$ hal config provider kubernetes account delete company-dev-acct
$ ./add_gke_account.sh # for gke_company_us-central1_company-prod
$ ./add_gke_account.sh # for gke_company_us-west1-a_company-dev
$ ./push_and_apply.sh
When the above didn't work, I did an experiment where I deleted the two accounts and added an account with a different name (but the same GKE cluster), then ran push_and_apply. As before, the output messages seemed to indicate that everything worked, but the Spinnaker UI continued to show all the old account names, despite the fact that I had deleted them and added new ones (which did not show up). And, as before, no details could be seen for any of the applications. Also note that hal config provider kubernetes account list did show the new account name and did not show the old ones.
Any ideas for what I can do, other than complete recreating our Spinnaker installation? Is there anything in particular that I should look for in the Spinnaker logs in GCP to provide more information?
Thanks in advance.
-Mark
The problem turned out to be that the data that was in my .kube/config file in Cloud Shell was obsolete. Removing that file, recreating it (via the appropriate kubectl commands) and then running the commands mentioned in my original description fixed the problem.
Note, though, that it took a lot of shell-script and GCP log reading by our team to figure out the problem. Ultimately, it would have been nice if the add_gke_account.sh or push_and_apply.sh scripts could have detected the issue, presumably by verifying that the expected changes did, in fact, correctly occur in the running Spinnaker.
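In Cloud Shell, the kubeconfig refresh described here might look roughly like this. The cluster names and locations are the ones from the question and are placeholders for your own; the gcloud command regenerates the credentials that the stale ~/.kube/config previously held.

```shell
# Remove the stale kubeconfig and regenerate credentials for each cluster
# (names/locations below are from the question; substitute your own).
rm -f ~/.kube/config
gcloud container clusters get-credentials company-prod --region us-central1
gcloud container clusters get-credentials company-dev --zone us-west1-a

# Then repeat the account setup from the original description:
hal config provider kubernetes account delete company-prod-acct
hal config provider kubernetes account delete company-dev-acct
./add_gke_account.sh   # for gke_company_us-central1_company-prod
./add_gke_account.sh   # for gke_company_us-west1-a_company-dev
./push_and_apply.sh
```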

Customised Google Cloud Shell image does not launch

Customised Google Cloud Shell image fails to launch; the error is 'Cloud Shell is experiencing some issues provisioning a VM to you. Please try again in a few minutes'. Repeated attempts to launch also fail.
I created a custom Google Cloud Shell image with an Ansible lab environment and setup tutorial. When this was tested approximately 10 days ago, it seemed to work as expected. Setup was performed using the following guide
Project is hosted with the 'Open in Google Cloud Shell' button here
The customised Cloud Shell image is hosted at gcr.io/diveintoansible/diveintoansible-lab-gcp-cloudshell
I've checked the permissions and these appear to be open to the public as desired.
Any advice on resolving this, greatly appreciated.
This usually happens because the base image is out of date. If your image worked a few weeks ago, you probably just need to rebuild it against the latest base image.
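A rebuild might look like the sketch below. The base image path is the one the custom Cloud Shell image docs reference, but verify it (and your Dockerfile's FROM line) before relying on this; the target image name is the one from the question.

```shell
# Pull the current Cloud Shell base so the rebuild isn't using a stale cache
# (base image path per the custom-image docs; double-check it).
docker pull gcr.io/cloudshell-images/cloudshell:latest

# Rebuild and push the custom image from the directory with the Dockerfile.
docker build --no-cache -t gcr.io/diveintoansible/diveintoansible-lab-gcp-cloudshell .
docker push gcr.io/diveintoansible/diveintoansible-lab-gcp-cloudshell
```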

AWS Amplify environment 'dev' not found

I'm working with AWS Amplify, specifically following this tutorial AWS-Hands-On-Tutorial.
I'm getting a build failure when I try to deploy the application.
So far I have tried creating multiple backend environments and connecting them with the frontend, hoping that this would alleviate the issue. The error message leads me to believe that the deploy is not set up to also detect the backend environment, even though I have it set to do so.
Also, I have tried changing the environment that is set to deploy with the frontend by creating another develop branch to see if that is the issue.
I've had no success with any of these; the build continues to fail. I have also tried running the 'amplify env add' command as the error message suggests. I have not, however, tried "restoring its definition in your team-provider-info.json", as I'm not sure what that entails and can't find any information on it. Regardless, I would think creating a new environment would solve any potential issues there, and it didn't. Any help is appreciated.
Due to the documentation being out of date, I completed the steps below to resolve this issue:
Under Build Settings, add a package version override for the Amplify CLI and leave it set to 'latest'.
Where the tutorial advises to "update your front end branch to point to the backend environment you just created. Under the branch name, choose Edit...", it says to use 'dev', but the setup actually created 'staging'; choose that instead.
Lastly, we need to set up a Service Role under General. Select General > Edit > Create New Service Role, select the default options, and save the role; it should have a name of amplifyconsole-backend-role. Once the role is saved, go back to General > Edit and select your role from the dropdown; if it doesn't show by default, start typing it in.
After completing these steps, I was able to successfully redeploy my build and get it pushed to prod with authentication working. Hope it helps anyone who is running into this issue on Module 3 of the AWS Amplify Starter Tutorial!
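For anyone who prefers checking this from the Amplify CLI rather than the console, something like the following can confirm which backend environments actually exist. Run it from the project root; 'staging' here is the environment name from the answer above and may differ in your project.

```shell
# List the backend environments Amplify knows about; the environment named
# in the build error ('dev') should show up here, or you'll see what was
# actually created (e.g. 'staging').
amplify env list

# Switch the local checkout to the environment the frontend branch deploys,
# then confirm the backend resources resolve.
amplify env checkout staging
amplify status
```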

AWS Elastic Beanstalk EbExtensionPostBuild and EbExtensionPreBuild executors

Good day,
I am in the process of deploying some of my applications to Elastic Beanstalk on AWS. From reading the documentation and tutorials I got it all deployed and working, but there is a big thing missing from the AWS documentation that I need to know. I cannot find the information I am looking for anywhere; can someone please give me a link to the documentation explaining this, or just explain it to me?
Who, what, and from where are the EbExtensionPreBuild and EbExtensionPostBuild actions executed? Who calls them, what do they run, and where do they get their commands from?
There are six actions in total, and nowhere on the internet does AWS explain what happens in these actions:
InfraWriteConfig...
DownloadSourceBundle...
EbExtensionPreBuild...
AppDeployPreHook...
EbExtensionPostBuild...
InfraCleanEbextension...
Can someone please explain these actions and link them to the bits of code they execute from the .config files in the .ebextensions folder?
Thank you
The environment I used to answer your question is PHP 7.3 running on 64bit Amazon Linux/2.9.2, but for other platforms, like Docker, the answer is probably the same, or at least the way to find it is.
In the log file /var/log/eb-commandprocessor.log you can find a record of all the tasks that were executed on your server; the most common is the deployment task CMD-AppDeploy.
This task is responsible for executing the following scripts:
CMD-AppDeploy
First stage: AppDeployStage0
DownloadSourceBundle
- /opt/elasticbeanstalk/bin/download-source-bundle
EbExtensionPreBuild
- /opt/elasticbeanstalk/eb_infra/infra-embedded_prebuild.rb
AppDeployPreHook
- /opt/elasticbeanstalk/hooks/appdeploy/pre
EbExtensionPostBuild
- /opt/elasticbeanstalk/eb_infra/infra-embedded_postbuild.rb
InfraCleanEbextension
- /opt/elasticbeanstalk/eb_infra/infra-clean_ebextensions_dir.rb
Second stage: AppDeployStage1
AppDeployEnactHook
- /opt/elasticbeanstalk/hooks/appdeploy/enact
AppDeployPostHook
- /opt/elasticbeanstalk/hooks/appdeploy/post
There is more than one task available in Beanstalk; you can find the full configuration in the file /opt/elasticbeanstalk/deploy/configuration/containerconfiguration.
Each script is a small part of the deployment process. If you need more details on how the deployment is done, I suggest you check each script individually.
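To see this on your own instance, the log and configuration files mentioned above can be inspected directly. This assumes you are on the Beanstalk EC2 instance itself and uses the paths given in this answer; the containerconfiguration file is JSON, so python3 (preinstalled on Amazon Linux) can pretty-print it.

```shell
# Show the most recent command-processor activity, which includes the
# stage/action names (EbExtensionPreBuild, AppDeployPreHook, etc.):
tail -n 50 /var/log/eb-commandprocessor.log

# Pretty-print the full task-to-script mapping:
python3 -m json.tool /opt/elasticbeanstalk/deploy/configuration/containerconfiguration | less
```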