I just started playing with Infrastructure as Code in Google Cloud.
Installed Terraform
Installed Terraformer
Created a new GCP project with a virtual machine in it.
My goal is to duplicate the project, with all its components, into a new project.
To do so, I am using Terraformer to reverse-Terraform my existing project. Command:
$ terraformer import google --connect --projects=[project_id] --resources=autoscalers,backendBuckets,backendServices,bigQuery,cloudFunctions,cloudsql,dataProc,disks,dns,firewalls,forwardingRules,gcs,gke,globalAddresses,globalForwardingRules,healthChecks,httpHealthChecks,httpsHealthChecks,iam,images,instanceGroupManagers,instanceGroups,instanceTemplates,instances,interconnectAttachments,kms,memoryStore,monitoring,networkEndpointGroups,networks,nodeGroups,nodeTemplates,project,pubsub,regionAutoscalers,regionBackendServices,regionDisks,regionInstanceGroupManagers,routers,routes,schedulerJobs,securityPolicies,sslPolicies,subnetworks,targetHttpProxies,targetHttpsProxies,targetInstances,targetPools,targetSslProxies,targetTcpProxies,targetVpnGateways,urlMaps,vpnTunnels
2019/06/20 08:00:08 google importing project [project_id]
2019/06/20 08:00:08 google importing... autoscalers
2019/06/20 08:00:19 googleapi: got HTTP response code 404 with body: Not Found
It seems like I have some kind of permission problem, since the Google API replies with a Not Found error code.
I guess Terraformer is using my gcloud credentials to access my GCP environment; is this true?
If so, my logged-in credentials have the Owner role on this project.
What should I check? How do I fix this issue?
You can use a service account with read access to the project, and set GOOGLE_CLOUD_KEYFILE_JSON to point to the credentials.json file of that service account.
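For example, a minimal sketch with gcloud (the project ID my-project and the service account name terraformer-sa are placeholders):
$ gcloud iam service-accounts create terraformer-sa --project=my-project
$ gcloud projects add-iam-policy-binding my-project --member="serviceAccount:terraformer-sa@my-project.iam.gserviceaccount.com" --role="roles/viewer"
$ gcloud iam service-accounts keys create credentials.json --iam-account=terraformer-sa@my-project.iam.gserviceaccount.com
$ export GOOGLE_CLOUD_KEYFILE_JSON=$PWD/credentials.json
With the variable set, re-run the terraformer import command from above.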
I have recently installed Moodle 4.0.2. It's a Bitnami installation, and it came with the Amazon S3 repository plugin installed as well. When I try to add a file to a course and use the file picker, it shows all the buckets, but when I click on a specific bucket, I see this error:
Debug info: S3::getBucket(): [InvalidRequest] The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. Error code: errorwhilecommunicatingwith
Any ideas how to solve this?
Same error here; Moodle 4.0's native Amazon S3 repository does not work with AWS4-HMAC-SHA256.
An alternative is the unofficial plugin: https://moodle.org/plugins/repository_s3bucket
Hi, I am trying to upload a file from GCS to Google Drive using
from airflow.contrib.operators.gcs_to_gdrive_operator import GcsToGDriveOperator
This is how the DAG looks:
copy_to_gdrive = GcsToGDriveOperator(
    task_id="copy_to_gdrive",
    source_bucket="my_source_bucket_on_gcs",
    source_object="airflow-dag-test/report.csv",
    destination_object="/airflow-test/report.csv",
    gcp_conn_id="bigquery_default",
    dag=dag,
)
This code executes without any errors, and in the logs I can see that the file is downloaded locally and uploaded to Google Drive successfully.
The code is executed by a service account. The issue I am facing is that I am not able to find the file or the directory this DAG is creating and uploading to.
I have tried several permutations/combinations of paths for "destination_object", but nothing seems to work, and the Google docs are not helpful either.
I can see in the API logs that the drive.create API is being called, but where it creates the file is unknown. Has anyone experienced this? Any help or tips would be greatly appreciated. Thanks!
Your service account is a Google account and, as a Google account, it has access to its own Drive. The files are correctly copied to Drive, but to the Drive of the service account!
You never specify the account, so how can Airflow know that it has to use yours?
Look at the operator documentation:
delegate_to (str) – The account to impersonate, if any. For this to work, the service account making the request must have domain-wide delegation enabled.
Use this parameter, fill it with your email, and enable domain-wide delegation for your service account.
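For example, a minimal sketch of the same task with delegation enabled (the email address is a placeholder for your own account):
copy_to_gdrive = GcsToGDriveOperator(
    task_id="copy_to_gdrive",
    source_bucket="my_source_bucket_on_gcs",
    source_object="airflow-dag-test/report.csv",
    destination_object="/airflow-test/report.csv",
    gcp_conn_id="bigquery_default",
    delegate_to="you@your-domain.com",  # account to impersonate (placeholder)
    dag=dag,
)
The file should then appear in the Drive of the impersonated account rather than the service account's own Drive.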
When I try to deploy my project to GAE, I get this error in the SDK shell:
ERROR: (gcloud.app.deploy) Permissions error fetching application [apps/responsive-my-super-app-201910]. Please make sure you are using the correct project ID and that you have permission to view applications on the project.
You haven't authenticated the Cloud SDK. Try running gcloud auth list. Is your email included in the credentialed accounts? If not, run gcloud auth login.
If you are listed in the credentialed accounts, then perhaps you haven't properly associated your project with your login, or you have a typo in your project name.
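For example, a quick check could look like this (reusing the project ID from the error message; substitute your own):
$ gcloud auth list
$ gcloud auth login
$ gcloud config set project responsive-my-super-app-201910
$ gcloud app deploy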
I configured a Parse Server on AWS Elastic Beanstalk using this guide. I've tested it and it all works fine.
Now I can't find a way to deploy Parse Dashboard on my server.
I did deploy Parse Dashboard on my localhost and connected it to the application on the server, but this way I cannot manage (add and remove) my apps.
Another problem is that Parse Dashboard is missing cloud code by default. I found this on GitHub, but I can't understand where to add the requested endpoints. Is it something like adding app.use('/scripts', express.static(path.join(__dirname, '/scripts'))); to the index.js file?
In order to deploy parse-dashboard to your EC2 instance, you need to follow the Deploying Parse Dashboard section on the parse-dashboard GitHub page.
Please make sure that when you deploy parse-dashboard you are using HTTPS and also basic authentication (this is also covered in the guide).
Now, regarding cloud code: the ability to deploy cloud code via the Parse CLI and to view the Node.js code in the dashboard is not available in parse-server; those were parse.com features. Cloud code in parse-server is handled by modifying the main.js file under the cloud folder, and deployment must be done manually by you. The big advantage of parse-server cloud code is that you can use any Node.js module you want from there; you are not restricted to the modules that parse.com allowed.
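For example, a trivial cloud function in cloud/main.js could look like this (the function name and message are illustrative):
// cloud/main.js -- loaded by parse-server via its "cloud" option
Parse.Cloud.define('hello', function(request, response) {
  // respond with a static string; callback-style API used by parse-server 2.x
  response.success('Hello from cloud code!');
});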
Another point about the dashboard: what you can do is create an Express application, add parse-server and parse-dashboard as middleware to it, and deploy the whole application to AWS. Then you can enjoy both parse-server (available under the /parse path, unless you changed it to something else) and parse-dashboard (available under the /dashboard path), as sketched below.
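A minimal sketch of that setup, assuming parse-server and parse-dashboard are installed as npm dependencies (the app ID, master key, database URI, and URLs are placeholders):
// index.js -- one Express app serving both the API and the dashboard
var express = require('express');
var ParseServer = require('parse-server').ParseServer;
var ParseDashboard = require('parse-dashboard');

var api = new ParseServer({
  databaseURI: 'mongodb://localhost:27017/dev', // placeholder database
  cloud: __dirname + '/cloud/main.js',          // cloud code entry point
  appId: 'myAppId',                             // placeholder
  masterKey: 'myMasterKey',                     // placeholder
  serverURL: 'https://myapp.example.com/parse'  // placeholder
});

var dashboard = new ParseDashboard({
  apps: [{
    serverURL: 'https://myapp.example.com/parse',
    appId: 'myAppId',
    masterKey: 'myMasterKey',
    appName: 'MyApp'
  }]
});

var app = express();
app.use('/parse', api);           // parse-server under /parse
app.use('/dashboard', dashboard); // parse-dashboard under /dashboard
app.listen(1337);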
Enjoy :)
I am currently creating a web app using Grails with the Multi Tenant Single DB plugin. The plugin allows me to have multiple tenants in a single DB, using a tenantID to differentiate between tenants. The plugin detects which tenant will handle the current request by resolving a different domain/subdomain for each tenant.
For example:
Tenant 1 = companyA.myapp.com
Tenant 2 = companyB.myapp.com
On my local machine, running Grails in development mode, I was able to implement the different hosts by editing my /etc/hosts so that each tenant had its own subdomain.
I am interested in using Cloud Foundry as my cloud platform, but when I deploy my app to Cloud Foundry, it already uses my app name as the subdomain of cloudfoundry.com.
For example:
- myapp.cloudfoundry.com
Is it possible to change or control the domain name resolution in Cloud Foundry?
Does anybody know how to handle multi-tenant subdomains as explained above in Cloud Foundry? Could you provide the steps for implementing this?
What is the best approach to implementing this in Cloud Foundry?
My app is using Grails 2.0.4 and Multi Tenant Single DB plugin 0.8.2.
Thanks
Unfortunately, the current beta version of Cloud Foundry does not allow modification of the cloudfoundry.com subdomain. The plan is to reach general availability towards the end of this year, with a private preview of that version of the site available sooner, in the fall. At that point you should be able to customize the subdomain.
Therefore you might need to change your TenantResolver a little so that it only checks the part of the hostname that varies, for example:
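Here is a minimal, hypothetical Groovy sketch (the class name and tenant mapping are illustrative, not the plugin's actual API):
class SubdomainTenantResolver {
    // illustrative mapping from the varying subdomain label to a tenantID
    static final Map TENANTS = [user1: 1, user2: 2]

    // e.g. "user1.yourapp.cloudfoundry.com" -> tenantID 1
    int resolveTenantId(String serverName) {
        String label = serverName.tokenize('.')[0]
        TENANTS[label] ?: 0
    }
}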
To implement your requirement, did you try installing the Grails cloud-foundry plugin? If not, you can start from here.
If you are using the CLI, installing the plugin just requires this command in your project workspace:
grails install-plugin cloud-foundry
When your app is ready for deployment, push it to Cloud Foundry:
grails cf-push
Note that you will need to have your Cloud Foundry credentials configured in the Grails config file.
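For instance, in grails-app/conf/Config.groovy (the property names are the cloud-foundry plugin's as I recall them; the values are placeholders):
grails.plugin.cloudfoundry.username = 'you@example.com'
grails.plugin.cloudfoundry.password = 's3cret'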
After that, you can map any additional URLs you want using:
grails cf-map user1.yourapp.cloudfoundry.com
If you already know about vmc, the client command-line interface for Cloud Foundry, you can see the URLs mapped to your app with:
vmc apps
If not, you can refer to the installation guide to get started, if you would like to do that.
If you are using STS/Eclipse, things are even easier. First you need to have the Grails support extension as well as the Cloud Foundry integration installed. For detailed docs on the CF integration, please refer here.
After your app is deployed, right-click the project and choose "Grails Tools" -> "Open Grails Command Prompt". This gives you the same Grails CF plugin commands as the CLI.
Hope this helps you move forward in the Cloud Foundry world. Let me know if you have more questions.
Thanks,
William