"Profile file cannot be null" error using DBeaver under Windows 10 with an AWS Athena IAM profile - amazon-iam

I checked this question and this thread on GitHub. It works for Mac but doesn't work for Windows,
and I am still getting the error.
It looks like DBeaver doesn't have access to the credentials file.
Any ideas?
Below you can see my settings:

Try adding a file called config in the same .aws directory with the following structure:
[your_DataAccess_profile_name]
region = your-aws-region
Pay attention to the file extensions: they should be config and credentials, not config.txt and credentials.txt.
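For reference, a minimal pair of files might look like the following; the profile name, region, and key values are placeholders (and note that the AWS CLI itself expects the config section header to be written as [profile your_DataAccess_profile_name]):

%USERPROFILE%\.aws\config:
[your_DataAccess_profile_name]
region = us-east-1

%USERPROFILE%\.aws\credentials:
[your_DataAccess_profile_name]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY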

Related

How do I create the AWS config and credentials files on a Windows system?

I have accidentally deleted the AWS credentials and config files from the location C:\Users\admin\.aws.
Now when I use the AWS CLI through PowerShell it throws an error saying the profile was not found, and I am unable to create or recover those two files. How do I do it?
I tried creating these files using Notepad, which did not work for me.
I think the path for the files would be "c:\users\admin\.aws\", right?
Once the files are added there, with the right settings, just try
aws sts get-caller-identity
to check if the profile's configuration files are accessible by the command line.
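If the files are gone entirely, the simplest way to recreate them (assuming the AWS CLI is installed) is to let aws configure write them for you; it creates both the config and the credentials file under C:\Users\admin\.aws. The values below are placeholders:

PS C:\> aws configure
AWS Access Key ID [None]: YOUR_ACCESS_KEY_ID
AWS Secret Access Key [None]: YOUR_SECRET_ACCESS_KEY
Default region name [None]: us-east-1
Default output format [None]: json

PS C:\> aws sts get-caller-identity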

AWS Elastic Beanstalk - .ebextensions

My app currently uses a folder called "Documents" that is located in the root of the app. This is where it stores supporting docs, temporary files, uploaded files etc. I'm trying to move my app from Azure to Beanstalk and I don't know how to give permissions to this folder and sub-folders. I think it's supposed to be done using .ebextensions but I don't know how to format the config file. Can someone suggest how this config file should look? This is an ASP.NET app running on Windows/IIS.
Unfortunately, you cannot use .ebextensions to set permissions to files/folders within your deployment directory.
If you look at the event hooks for an Elastic Beanstalk deployment:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-windows-ec2.html#windows-container-commands
you'll find that commands run before the EC2 app and web server are set up, and
container_commands run after the EC2 app and web server are set up, but before your application version is deployed.
The solution is to use a wpp.targets file to set the necessary ACLs.
The following SO post is most useful
Can Web Deploy's setAcl provider be used on a sub-directory?
Given below is a sample .ebextensions config file that creates a directory and a file, modifies the permissions, and adds some content to the file:
====== .ebextensions/custom_directory.config ======
commands:
  create_directory:
    command: mkdir C:\inetpub\AspNetCoreWebApps\backgroundtasks\mydirectory
  set_permissions:
    command: cacls C:\inetpub\AspNetCoreWebApps\backgroundtasks\mydirectory /t /e /g username:W
files:
  "C:/inetpub/AspNetCoreWebApps/backgroundtasks/mydirectory/mytestfile.txt":
    content: |
      This is my Sample file created from ebextensions
.ebextensions config files go in a directory called .ebextensions at the root of the application source code. For more information on how to use ebextensions, please go through the documentation here.
Place a file 01_fix_permissions.config inside .ebextensions folder.
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/49_change_permissions.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo chown -R ec2-user:ec2-user tmp/
Following that you can set your folder permissions as you want.
See this answer on Serverfault.
There are platform hooks that you can use to run scripts at various points during deployment, which can get you around the shortcomings of the .ebextensions Commands and Platform Commands that Napoli describes.
There seems to be some debate on whether or not this setup is officially supported, but judging by comments made on the AWS github, it seems to be not explicitly prohibited.
I can see where Napoli's answer could be the more standard MS way of doing things, but wpp.targets looks like hot trash IMO.
The general scheme of that answer is to use Commands/Platform commands to copy a script file into the appropriate platform hook directory (/opt/elasticbeanstalk/hooks or C:\Program Files\Amazon\ElasticBeanstalk\hooks\ ) to run at your desired stage of deployment.
I think it's worth noting that differences exist between platforms and versions, such as Amazon Linux 1 and Amazon Linux 2.
I hope this helps someone. It took me a day to gather that info and what's on this page and pick what I liked best.
Edit 11/4 - I would like to note that I saw some inconsistencies with the File .ebextensions directive when trying to place scripts directly into the platform hook dirs during repeated deployments. Specifically, the File directive failed to correctly move the backup copies named .bak/.bak1/etc. I would suggest using a Container Command to copy (with overwriting) from another directory into the desired hook directory to overcome this issue.
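For example, a minimal sketch of that approach on the older Amazon Linux platform could look like the following, assuming the hook script is shipped in a scripts/ folder inside the source bundle (both the folder and the script name are made up for illustration):

container_commands:
  01_install_predeploy_hook:
    command: cp -f scripts/49_change_permissions.sh /opt/elasticbeanstalk/hooks/appdeploy/pre/49_change_permissions.sh
  02_make_hook_executable:
    command: chmod +x /opt/elasticbeanstalk/hooks/appdeploy/pre/49_change_permissions.sh

Because container commands run from the staging directory that holds the extracted source bundle, the relative scripts/ path resolves against your application code.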

Adding JDBC jar driver to classpath for AWS Elastic Beanstalk job

I have an Elastic Beanstalk application that I'm trying to configure to connect to a FileMaker Pro database, over JDBC. The code I'm using is:
import jaydebeapi as jdp

jdbc_driver_location = '/tmp/fmjdbc.jar'
conn = jdp.connect(jdbc_driver_class,
                   jdbc_connection_type + '://' + db_url + '/' + db_name,
                   [user_name, password], jdbc_driver_location,)
When I attempt this, I get the following error:
java.sql.SQLException: No suitable driver found for jdbc:filemaker://10.120.120.108/carecord-<class 'jpype._jexception.java.sql.SQLExceptionPyRaisable'>
To try to solve the problem, I've added the jdbc.jar to both the /tmp folder of the EC2 instance and the project directory. When I SSH into the EC2 instance and issue the command:
JAVA_HOME=/tmp/fmjdbc.jar
The program will run the next time it's prompted, without issue. After a few hours it will give the original error and needs to be issued the above command again to work. To fix this I tried adding the following to .ebextensions, to copy the .jar into the /tmp folder from the project directory and issue the above command to the server from the start:
commands:
  command01:
    command: sudo cp /opt/python/current/app/fmjdbc.jar /tmp/fmjdbc.jar
  command02:
    command: JAVA_HOME=/tmp/fmjdbc.jar
But the project still gives the error. Any thoughts on how I can add this driver to the classpath such that the job will run consistently?
To help folks who have this issue in the future, the answer to this that I found was at the end of this thread.
I appended the following:
import jpype

if jpype.isJVMStarted() and not jpype.isThreadAttachedToJVM():
    jpype.attachThreadToJVM()
    jpype.java.lang.Thread.currentThread().setContextClassLoader(jpype.java.lang.ClassLoader.getSystemClassLoader())
Just above the
jdbc_driver_location = '/tmp/fmjdbc.jar'
section of my original code above. This allows the application to loop and successfully find the necessary driver.
JAVA_HOME is supposed to point to the location where Java is installed on the server. You don't use JAVA_HOME to add libraries to the classpath. You shouldn't have to set any environment variables for your code to work.
The root of your problem is that you are copying the file to /tmp/fmjdbc.jar but you are setting jdbc_driver_location to be /tmp/jdbc.jar. Notice how those file names are different. To fix your code change it to this:
jdbc_driver_location = '/tmp/fmjdbc.jar'

Elastic Beanstalk ebextension config files using VS toolkit

I'm trying to set folder permissions for creating additional folders/files inside the Temp directory in my .NET project.
I know there are a lot of references for similar questions like this (as given):
Running a .config file on Elastic Beanstalk?
How To Set Folder Permissions in Elastic Beanstalk Using YAML File?
I'm having trouble verifying whether the permission was granted; I'm not able to create any folders/files in the Temp folder, and I can't find any permission-related errors in the Elastic Beanstalk logs (last 100 lines) during deployment either.
I'm using the following code in my config file:
command: icacls \"C:/inetpub/wwwroot/myapp_deploy/Temp\" /grant DefaultAppPool:(OI)(CI)
(I have replaced myapp with the EB application name.)
Can anyone please help?
I was having trouble posting the code, and a general unknown exception error was being generated with no additional details.
I figured out the mistake I was making:
the icacls command requires the directory path to use double backslashes ('\\').
The verified command for the above would be:
command: "icacls \"C:\\inetpub\\wwwroot\\Temp\" /grant DefaultAppPool:(OI)(CI)F"

Error when using AWS-SDK-GO (NoCredentialProviders: no valid providers in chain)

I've recently started using the aws-sdk-go package.
Walking through the instructions, my folder structure is as follows:
bin/, pkg/ (as always)
src/
    app/main.go (code taken from the docs)
    github.com/aws
Now when I run go install and then execute the app.exe (using Windows here), I'm getting the following error:
panic: NoCredentialProviders: no valid providers in chain
Any ideas?
You need to provide an AWS access key and secret key to authenticate and use AWS services.
See the README here https://github.com/aws/aws-sdk-go#configuring-credentials
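If it helps, here is a minimal sketch using the v1 aws-sdk-go API that reads the default profile from the shared credentials file (%USERPROFILE%\.aws\credentials on Windows) and verifies it with STS; the region and the commented-out static keys are placeholders, not values from the question:

package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/sts"
)

func main() {
    // Resolve credentials from the shared credentials file, "default" profile.
    sess, err := session.NewSession(&aws.Config{
        Region:      aws.String("us-east-1"), // placeholder region
        Credentials: credentials.NewSharedCredentials("", "default"),
    })
    if err != nil {
        panic(err)
    }

    // Alternative: pass the keys explicitly instead of using the file.
    // creds := credentials.NewStaticCredentials("ACCESS_KEY_ID", "SECRET_KEY", "")

    // Quick sanity check that the credentials actually resolve.
    out, err := sts.New(sess).GetCallerIdentity(&sts.GetCallerIdentityInput{})
    if err != nil {
        panic(err)
    }
    fmt.Println(*out.Arn)
}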
If anyone runs into the same issue I had with this:
I read a doc that said to put the file at %USERPROFILE%.awscredentials on Windows, but it just left out the slashes. It should be %USERPROFILE%/.aws/credentials.
Double-check the format of your ~/.aws/credentials file.
In my case, the credentials used the following format:
[profile]
AWS_ACCESS_KEY_ID=xxxx
AWS_SECRET_ACCESS_KEY=yyyy
Changing it to the following fixed the issue:
[profile]
aws_access_key_id = xxxx
aws_secret_access_key = yyyy