DigitalOcean API key changed: how to update it with doctl? - digital-ocean

My old DigitalOcean API key was deleted and a new one created. How do I update doctl to the new value?
doctl auth init still points to the old value. I did not see any documentation on this. I uninstalled doctl, ran rm -rf "$HOME/Library/Application Support/doctl", then reinstalled, and it still points to the invalid key.
Thank you for your time and energy in advance.

Jeff, the following command can be used to update your key:
doctl auth init --access-token <your-new-key>
This should update the config for further use as well. If not, check your config.yaml to make sure the token was updated.
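If the old token still shows up, here is a minimal way to re-initialize and verify (a sketch; the config path is the macOS location from your question, and <your-new-key> is your own token):
doctl auth init --access-token <your-new-key>
doctl account get
cat "$HOME/Library/Application Support/doctl/config.yaml"
doctl account get makes an authenticated API call, so it is a quick check that the new token is actually the one being used.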

How to reset Production Amplify DB

I am still in the development stage of a site and I would like to completely clean the current live DB as well as flush the previous schemas. I am making so many changes to the schemas that I am no longer able to amplify push.
Is this possible with AWS Amplify?
When I attempt to push, I get:
An error occurred during the push operation: Attempting to edit the global secondary index
byDivision on the DivisionTable table in the Division stack.
You can simply run:
amplify api remove
amplify api push
so it will remove the API (including the DynamoDB tables). Make sure you first check out the correct environment (amplify env checkout <env_name>).
After that you need to add API once again using:
amplify api add
However, if you want to create a clean environment from scratch, then simply:
amplify env remove <env_name>
amplify init
This will first remove the environment and then recreate it. It takes a while but works fine!
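Putting the first option together, the sequence looks roughly like this (a sketch; the environment name dev is a placeholder, and the final push after re-adding the API is implied rather than stated above):
amplify env checkout dev
amplify api remove
amplify api push
amplify api add
amplify api push
The second option stays as described: amplify env remove <env_name> followed by amplify init.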

APEX_ADMINISTRATOR_ROLE in AWS RDS Oracle Instance

I am trying to install APEX on my AWS Oracle 12 RDS instance. To achieve this, I am following these instructions: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.APEX.HTML
However, I got stuck at step 7:
Step 7:
You must set a password for the APEX admin user. To do this, use
SQL*Plus to connect to your DB instance as the master user, and then
issue the following commands:
grant APEX_ADMINISTRATOR_ROLE to master;
@/home/apexuser/apex/apxchpwd.sql
Replace master with your master user name. When the apxchpwd.sql
script prompts you, type a new admin password
When I log into my RDS instance with my master user and execute this:
grant APEX_ADMINISTRATOR_ROLE to [mymasteruser];
I received this error:
ERROR at line 1:
ORA-01924: role 'APEX_ADMINISTRATOR_ROLE' not granted or does not exist
Can you please help me to solve this?
Edit 12/09/2017.
Using this post/answer:
https://serverfault.com/questions/276541/how-do-you-recover-you-rds-master-user-username
I understand my master user is the one shown in the following image. As far as I know, in an RDS instance I have no access to the sys or system users, so this is the only user I can use.
Many thanks
Edit 20/09/2017.
I applied Alex's solution, and it works! However, a few issues to note:
The tutorial was changed; in fact the URL changed, and is now
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.APEX.html (the last "html" was in uppercase before),
but it is not reliable now; there are some points that should be fixed. For example, it now says that RDS supports Oracle APEX version 5.1.2. I tried with this version and I got this error:
Also, some directories don't match the previous step ....
So I used the version that the tutorial originally specified: Oracle APEX version 4.2.6.v1
I had to execute both statements:
EXEC rdsadmin.rdsadmin_util.grant_apex_admin_role;
grant APEX_ADMINISTRATOR_ROLE to [master];
Then I could execute the apxchpwd.sql script successfully!
But unfortunately, when I accessed my APEX home page and tried to create a new workspace "ws_prueba", I received this error (I'm trying to create it with my APEX admin user):
Any ideas?
Use
EXEC rdsadmin.rdsadmin_util.grant_apex_admin_role;
instead. I have a case open on this with AWS and just asked them to update the documentation page.
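For reference, the whole step from a shell looks roughly like this (a sketch; the endpoint and service name are placeholders for your own RDS instance, and apxchpwd.sql will prompt interactively for the new admin password):
$ sqlplus master@//your-rds-endpoint.rds.amazonaws.com:1521/ORCL
SQL> EXEC rdsadmin.rdsadmin_util.grant_apex_admin_role;
SQL> @/home/apexuser/apex/apxchpwd.sql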

Error provisioning namespace. ORA-20001 Request could not be processed at Oracle Apex

I finally managed to install Oracle APEX 5.1.2, but I have a problem with creating a workspace. Whenever I try to do so, at the end I get an error:
I tried to create this workspace with the following values:
The strange thing is that when I try to use Yes as the option for Reuse Existing Schema, no schemas are listed. Is it possible that APEX somehow doesn't have access to managing schemas?
I am using APEX with ORDS. On the home page I see that I have 1 workspace and 1 schema.
I've tried:
Using strong passwords, as mentioned here
Changing the provisioning type to request: the effect is the same. If a user requests a workspace and I accept it, I get the exact same error.
Enabling OMF with the parameter DB_CREATE_FILE_DEST = '/u01/app/oracle/oradata' -> the *.dbf files are not created, either before or after the change of directory.
The root cause of this problem was that APEX was installed both on CDB$ROOT and, as a result, on PDB1. I uninstalled APEX from the root, repaired with the @utlrp.sql script as in this tutorial, and installed APEX again, but only on PDB1. The workspace was then successfully created.
I had the same problem (APEX 18.1/ORDS) in a database without a CDB configured. The solution in my case was to run the @apex_rest_config.sql script.
After that, the workspace was created without any problem.
If you don't want to reinstall APEX to move it from the CDB to the PDB, I suggest you try setting the PDB mapping in your ORDS config file.
https://docs.oracle.com/en/database/oracle/oracle-rest-data-services/20.2/aelig/configuring-REST-data-services.html#GUID-694B2F89-CE4F-4AB0-88E2-EB35D03DEC3C
I did it by adding
<entry key="db.serviceNameSuffix"></entry>
to the end of my defaults.xml (you can find its location by running
$ java -jar ords.war configdir ).
Then access APEX with /yourpdb in the path, e.g.
http://server:port/ords/pdb1
This will run APEX from that PDB instead of from the CDB and will create the workspace in there; that should work OK. It did for me.
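As a rough sketch of those steps (ords.war's location depends on your install, <configdir> is whatever the first command prints, and the editor is just an example):
$ java -jar ords.war configdir
$ nano <configdir>/defaults.xml    (add the db.serviceNameSuffix entry shown above at the end of the file)
then browse to http://server:port/ords/yourpdb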
I had the same problem on Oracle 12c; according to this link, my problem was solved. The problem is that users can't create a workspace in the CDB, so you must change the session container to the PDB with the following steps:
$root> cd ~/TEMP/apex
$root> sqlplus
Enter user-name: sys as sysdba
Enter password:
SQL> exec dbms_xdb.sethttpport(0); /* setting the port to 0 disables it in the root container */
SQL> alter session set container=YOURAPPEXPDB;
SQL> exec dbms_xdb.sethttpport(8181);
SQL> alter system register;
// install Oracle APEX again
To remove Oracle APEX I used this link; it worked perfectly for me.

Installing PDO on Amazon AWS

I'm trying to install a simple app on Amazon AWS. Since I'm really new to servers, I used Elastic Beanstalk.
Everything was OK, but when I run my app I get an error: PDO error: could not find driver.
I tried mysqli_ping on the connection and got boolean true, so that part is OK.
I checked for help, but all I found is here:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_PHP.rds.html
2. If you plan to use PDO, install the PDO drivers. For more information, go to http://www.php.net/manual/pdo.installation.php.
But I really don't know what to do with this information. Any help?
So, it's quite a procedure. First you have to get SSH access to your instance:
1. Generate a key pair for your instance and download it in .pem format.
Go to: https://console.aws.amazon.com/ec2/v2/home?region=eu-west-1#Instances (change for your region),
click Key Pairs, then Create Key Pair, create your new key pair, and download it to your computer.
Associate your instance with the key: go to Elastic Beanstalk, select your application, select Configuration, Instances, and select your new key from the EC2 key pair drop-down.
2. Download PuTTY for Windows (installer) and install it: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
3. Transform the key to .ppk format using PuTTYgen:
http://www.techrepublic.com/blog/the-enterprise-cloud/connect-to-amazon-ec2-with-a-private-key-using-putty-and-pageant/?tag=nl.e011#.
4. Set up PuTTY to use the key: http://docs.aws.amazon.com/gettingstarted/latest/wah-linux/getting-started-deploy-app-connect.html
5. Run PuTTY, find your instance's public DNS and add ec2-user@ in front of it, so it looks like this: ec2-user@ec2-54-76-47-0.eu-west-1.compute.amazonaws.com
Then it is as simple as: yum install php-pdo
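Once you are connected through PuTTY as ec2-user, the last step is roughly the following (a sketch; I am assuming an Amazon Linux instance running Apache, so sudo and the httpd restart may differ on your setup):
$ sudo yum install php-pdo
$ sudo service httpd restart    (restart the web server so PHP picks up the new extension)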

Cannot run 'rake paperclip:refresh:thumbnails CLASS=Spree::Image' in Rails Spree app console, getting No Such Key

I am trying to run RAILS_ENV=production rake paperclip:refresh:thumbnails CLASS=Spree::Image
on my remote server in my current Rails app directory, so I can refresh the Spree images that I have uploaded in the past.
I am using S3, and my bucket is set up correctly, as I can see each of my product's images in individual ID folders in my AWS S3 bucket.
But each time I run the above command I get a 'No Such Key' error and the rake task is aborted.
This command runs locally and works fine (obviously without the RAILS_ENV=production locally).
Ok so I wrote this question to answer it myself. I hope the question makes sense.
For clarity, I had this issue because of old images (old, non-existing paths associated with an old S3 key) that I had uploaded with another S3 key during previous testing on the same Rails app. I did this earlier while trying to get S3 to work with my Rails Spree application.
What I did to solve this was go into my Rails console on my remote server with this command:
$RAILS_ENV=production rails c
I then listed all Spree::Images, ordered by update time, with this:
$y Spree::Image.all(:order => 'attachment_updated_at')
The 'y' is a nice little YAML way of displaying the information of a Spree::Image that is a little more human-readable.
Next I looked at the ID of each Image and noticed that there was a good amount of them with IDs that did not match folders in my AWS S3 bucket.
In my case the lowest ID number that was in fact a folder in my S3 bucket was '1078', so I ran this:
$Spree::Image.where('id < ?', 1078).destroy_all
This deleted any Spree::Image that had an ID of 1077 or less.
Finally, I closed the Rails console and ran this on my remote server inside my current Rails app directory (in my case it was /home/deployer/apps/potentialapp/current/):
$RAILS_ENV=production rake paperclip:refresh:thumbnails CLASS=Spree::Image
This reformatted my uploaded images on Spree and everything is now working great.
Hope this saves someone a great big headache. (Oh, and empty your cache when you go to test whether the images have in fact reloaded; I almost cried at 4 am last night.)
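Pulled together, the whole remote sequence looks like this (the app path and the cutoff ID 1078 are specific to my setup, so use your own values):
$ cd /home/deployer/apps/potentialapp/current/
$ RAILS_ENV=production rails c
(inside the console)
y Spree::Image.all(:order => 'attachment_updated_at')
Spree::Image.where('id < ?', 1078).destroy_all
exit
(back in the shell)
$ RAILS_ENV=production rake paperclip:refresh:thumbnails CLASS=Spree::Image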
I solved the same problem using the console and skipping errors (old/broken S3 assets):
Spree::Image.all.each { |i| i.attachment.reprocess! rescue nil }
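If you prefer a one-liner from the shell, rails runner can wrap the same loop (a sketch, assuming the same production setup as above):
$ RAILS_ENV=production rails runner 'Spree::Image.all.each { |i| i.attachment.reprocess! rescue nil }'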