While migrating my local Oracle DB on Windows from 11g to 19c, running setup.exe gave me:
[INS-30014] Unable to check whether the location specified is on CFS
In my case, I needed to open my hosts file (C:\Windows\System32\drivers\etc\hosts) and comment out the following:
# 192.168.0.111 host.docker.internal
# 192.168.0.111 gateway.docker.internal
# 127.0.0.1 kubernetes.docker.internal
These seemed to be remnants of my old Docker installation, which has since been removed.
In my case this was a permissions issue on the Oracle Home directory. If your Windows PC name is 16 characters or longer, the name of the Administrators group is affected, because only the first 15 characters are used. The mismatch between that truncated name and the full name is what was causing the issue: DESKTOP-ASUS-ROG vs. DESKTOP-ASUS-RO in my case (notice the missing G). I renamed the PC to DESKTOPASUS, restarted the machine, and the install worked without issue. The name discrepancy was apparent in the security configuration of the folder while my system name was over the maximum.
You can get your PC name by running hostname from the command line. If it's 16 characters or longer, rename the PC to 15 characters or fewer and restart.
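As a rough sketch (PowerShell run as administrator; the new name is only an example):
# show the current computer name
hostname
# rename to 15 characters or fewer and reboot (example name)
Rename-Computer -NewName "DESKTOPASUS" -Restart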
The underlying issue is indirectly discussed here in a different context:
https://learn.microsoft.com/en-us/troubleshoot/windows-server/identity/naming-conventions-for-computer-domain-site-ou
Maximum name length: 15 characters.
I tried other solutions from Stackoverflow, but none worked in my case.
Here is the fix:
Go to Control Panel > Network and Internet > Network Connections
Disable:
vEthernet (Docker)
vEthernet (Default switch)
Resume the install.
Re-enable both disabled vEthernet adapters.
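If you prefer to do the same from PowerShell, a hedged equivalent (the adapter names are the ones from this machine and may differ on yours):
# disable the Hyper-V virtual adapters for the duration of the install
Disable-NetAdapter -Name "vEthernet (Docker)", "vEthernet (Default switch)" -Confirm:$false
# ...run the Oracle installer, then re-enable them:
Enable-NetAdapter -Name "vEthernet (Docker)", "vEthernet (Default switch)"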
I got it to work by installing the software first and then running dbca to create the database.
Execute hostname in a command prompt.
Add the hostname returned by the hostname command next to your IP address in the Windows hosts file, and also add the fully qualified hostname (with the domain name) next to it:
9.115.154.54 LP1-AP-WIN73 LP1-AP-WIN73.yourdomain.com
Save the hosts file and resume the install.
INS-30014: Unable to check whether the location specified is on CFS
Cause: The location specified might not have the required permissions.
Action: Provide a location which has the appropriate required permissions.
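If the installer really is hitting a permissions problem, one hedged way to grant them from an elevated command prompt (the Oracle home path below is just a placeholder):
rem give the local Administrators group full control of the Oracle home, recursively
icacls "C:\app\oracle" /grant Administrators:(OI)(CI)F /T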
During installation of Oracle 21c Enterprise,
I got:
INS-30014: Unable to check whether the location specified is on CFS
In my case, I removed all the unwanted Oracle paths from my environment variables, the ones that were no longer useful, and it worked fine for me.
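A quick, hedged way to spot leftover entries from a command prompt:
rem list the PATH and look for stale Oracle directories
echo %PATH% | findstr /i oracle
Remove any stale entries via Control Panel > System > Advanced system settings > Environment Variables.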
I faced this issue when I tried installing Oracle 19c on Windows 10. I just disconnected from the internet (Wi-Fi), and the issue disappeared.
While installing Oracle 19c I was faced with INS-30014: Unable to check whether the location specified is on CFS.
On Windows 11 I was able to solve the error by connecting to the internet.
The company I'm working with is developing a web application based on the Laravel framework, using Google Cloud Platform infrastructure. The frontend VM is a CentOS 8 OS with the Apache web server installed.
It seems that a developer ran a pretty massive "dnf upgrade", which included kernel, openssl, kerberos, and other packages.
After the upgrade, it seems that ldconfig has lost its mind:
[developer@webserver ~]$ sudo su - root
sudo: error in /etc/sudo.conf, line 19 while loading plugin "sudoers_policy"
sudo: unable to load /usr/libexec/sudo/sudoers.so: /lib64/libldap-2.4.so.2: undefined symbol: EVP_md4, version OPENSSL_1_1_0
sudo: fatal error, unable to load plugins
The same happens for other commands like "dnf" or "rpm":
[developer@webserver ~]$ rpm
rpm: symbol lookup error: /lib64/librpmio.so.8: undefined symbol: EVP_md2, version OPENSSL_1_1_0
After a bit of investigation, I found that the same commands work when the LD_LIBRARY_PATH variable is specified:
[developer@webserver ~]$ LD_LIBRARY_PATH=/lib64 rpm
RPM version 4.14.3
Copyright (C) 1998-2002 - Red Hat, Inc.
This program may be freely redistributed under the terms of the GNU GPL
...
...of course, I can't do the same trick with the "sudo" command.
An important fact is that the VM was still running and had never been rebooted (I'll explain later why I'm saying this).
(And finally... to the point.)
The major problem is that we can't use the root account because "sudo" is not working and, by default, Google uses public key authentication (local users have random passwords generated by GCP). So right now I can't even do a "dnf reinstall" to try to fix the issue.
I was afraid that, once rebooted, every service would stop working because of the incorrect library dependency paths, so instead of rebooting I created an image based on the VM, and then a new VM based on that image.
As I feared: once the new VM booted, every service stopped working. I was able to read the logs over the serial console of the GCP web interface.
A snippet:
...
Oct 27 20:20:30 webserver google_oslogin_nss_cache[783]: /usr/bin/google_oslogin_nss_cache: /lib64/libjson-c.so.4: no version information available (required by /usr/bin/google_oslogin_nss_cache)
Oct 27 20:20:30 webserver NetworkManager[778]: /usr/sbin/NetworkManager: symbol lookup error: /lib64/libldap-2.4.so.2: undefined symbol: EVP_md4, version OPENSSL_1_1_0
Oct 27 20:20:30 webserver google_oslogin_nss_cache[783]: /usr/bin/google_oslogin_nss_cache: symbol lookup error: /lib64/libldap-2.4.so.2: undefined symbol: EVP_md4, version OPENSSL_1_1_0
Oct 27 20:20:30 webserver sssd[771]: ldb: unable to dlopen /usr/lib64/ldb/modules/ldb/ldap.so : /lib64/libldap-2.4.so.2: undefined symbol: EVP_md4, version OPENSSL_1_1_0
...
Using Google's official documentation, I found the "startup-script" section of the VM properties, which runs at every boot and can be used to change users' passwords.
I know that, by default, all VMs have root access disabled, so I made this script and added it to the VM's "automation" settings:
#! /bin/bash
echo 'developer:PASSWORD' | chpasswd
echo 'root:PASSWORD' | chpasswd
Once rebooted, I tried to log in using the "serial console" option in the web interface, but with no luck. I also used journalctl (as a normal user) to find something in the logs... but nothing.
I suppose that is a consequence of that "google_oslogin_nss_cache" error:
there's no way to run that script.
Searching the internet, I found some posts where someone was able to log in directly as "root" using the "gcloud compute ssh" command. So I tried to log in as described from another VM in the same project, using both my Google account user and the root user... but this way too, no luck.
(I forgot to mention that my Google account has the "project owner" role, so I have all the necessary permissions.)
Is there another way to reset the "root" password without using "sudo", or do I have to rebuild the VM from scratch?
I'm sorry for the long explanation... I hope everything is clear.
Thanks
So... this question actually breaks down into 2 different issues:
The only possible way for me to recover the "root" account was to stop the VM, detach the boot disk, attach it to a new VM, mount the filesystem, and modify the user. Once the boot disk was reattached to the original VM, the modified account could be used.
The second issue was caused by upgrading openssl, so in the end the only way to get rid of those error messages was to create a new file, /etc/ld.so.conf.d/libc.conf, containing:
/usr/lib64
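A rough sketch of the whole rescue procedure (device, partition, and mount point names are assumptions; check lsblk on the rescue VM for the real ones):
# with the broken boot disk attached to a rescue VM as a second disk:
sudo mount /dev/sdb1 /mnt                      # partition number may differ
# restore the default library search path that the upgrade lost
echo '/usr/lib64' | sudo tee /mnt/etc/ld.so.conf.d/libc.conf
sudo chroot /mnt ldconfig                      # rebuild the linker cache on the broken system
sudo umount /mnt
Then detach the disk and reattach it to the original VM as its boot disk.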
I have deployed two Rails apps to DigitalOcean, Ubuntu 18.04, with Passenger and Nginx.
Both apps were built on Rails 5.2.2 with Ruby 2.5.1, and the second app has all the same gems at the same versions. While the first app runs fine, the second will not launch.
The last useful line of the Passenger log says:
[ E 2020-08-06 22:41:56.6186 30885/T1i age/Cor/App/Implementation.cpp:221 ]: Could not spawn process for application /var/www/html/AppName_Prod/current: The application encountered the following error: ActiveSupport::MessageEncryptor::InvalidMessage (ActiveSupport::MessageEncryptor::InvalidMessage)
I know this is something to do with the master.key file, but that is present and contains the correct key. I'm not using environment variables to store the master keys; they are in the master.key file inside each app's directory structure.
I've read every SO post I could find on this and none have solved my issue.
Any suggestions for getting these two apps (and more) to work on the same droplet?
I'm all out of ideas.
Thank you for any help you can offer.
For anyone who might have the same issue, it was a bit deceptive.
I had tried rails credentials:edit and it didn't fix the issue, but I found that the app's containing folder was owned by user:user, whereas my other app's was owned by user:root.
When I changed this, everything started to work.
I hope it helps someone because I didn't find this info anywhere online and it was a lot of trial and error.
Use ls -l to list the current owner of folders in the current working directory, so you can compare them.
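For example (a sketch; the path, user, and app name are taken from this question and will differ on your droplet):
ls -l /var/www/html                                  # compare the owners of both app directories
sudo chown -R user:root /var/www/html/AppName_Prod   # match the working app's ownership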
For me, this turned out to be somewhat complicated. I had provisioned my server using Ansible, which has a task to copy the Nginx conf. After provisioning the server, I changed RAILS_MASTER_KEY.
It turns out that my Ansible task does not rewrite the Nginx conf if it already exists on the server (the file contents are not compared, I guess). So although I updated RAILS_MASTER_KEY in my Ansible playbook (and it was even getting copied across to the server's environment variables!), it was not accessible to Rails through Passenger, because Passenger does not pass on the user's environment variables.
Whew!
To fix this (and create a snowflake server in the process...) I manually logged into the server and updated RAILS_MASTER_KEY to the new value in the Nginx passenger_env_var directive.
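For reference, the directive in the Nginx conf looks roughly like this (the key value is a placeholder; keep the real one out of version control):
server {
    # ...existing Passenger/Rails settings...
    passenger_env_var RAILS_MASTER_KEY your-new-master-key;
}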
I have an Elastic Beanstalk application that I'm trying to configure to connect to a FileMaker Pro database over JDBC. The code I'm using is:
import jaydebeapi as jdb

jdbc_driver_location = '/tmp/fmjdbc.jar'
conn = jdb.connect(jdbc_driver_class,
                   jdbc_connection_type + '://' + db_url + '/' + db_name,
                   [user_name, password], jdbc_driver_location)
When I attempt this, I get the following error:
java.sql.SQLException: No suitable driver found for jdbc:filemaker://10.120.120.108/carecord-<class 'jpype._jexception.java.sql.SQLExceptionPyRaisable'>
To try to solve the problem, I've added the jdbc.jar to both the /tmp folder of the EC2 instance and the project directory. If I SSH into the EC2 instance and issue the command:
JAVA_HOME=/tmp/fmjdbc.jar
the program will run the next time it's prompted, without issue. After a few hours it gives the original error and needs the above command again to work. To fix this, I tried adding the following to .ebextensions, to copy the .jar into the /tmp folder from the project directory and issue the above command on the server from the start:
commands:
  command01:
    command: sudo cp /opt/python/current/app/fmjdbc.jar /tmp/fmjdbc.jar
  command02:
    command: JAVA_HOME=/tmp/fmjdbc.jar
But the project still gives the error. Any thoughts on how I can add this driver to the classpath so the job runs consistently?
To help folks who have this issue in the future, the answer I found was at the end of this thread.
I appended the following:
import jpype

if jpype.isJVMStarted() and not jpype.isThreadAttachedToJVM():
    jpype.attachThreadToJVM()
    jpype.java.lang.Thread.currentThread().setContextClassLoader(jpype.java.lang.ClassLoader.getSystemClassLoader())
Just above the
jdbc_driver_location = '/tmp/fmjdbc.jar'
section of my original code above. This allows the application to loop and successfully find the necessary driver.
JAVA_HOME is supposed to point to the location where Java is installed on the server. You don't use JAVA_HOME to add libraries to the classpath. You shouldn't have to set any environment variables for your code to work.
The root of your problem is that you are copying the file to /tmp/fmjdbc.jar but you are setting jdbc_driver_location to be /tmp/jdbc.jar. Notice how those file names are different. To fix your code, change it to this:
jdbc_driver_location = '/tmp/fmjdbc.jar'
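For completeness, a minimal connect call with everything filled in might look like this (the driver class name is an assumption, and the URL is taken from the error message in the question):
import jaydebeapi

conn = jaydebeapi.connect(
    'com.filemaker.jdbc.Driver',                 # FileMaker JDBC driver class (assumed)
    'jdbc:filemaker://10.120.120.108/carecord',  # URL from the question's error output
    ['user_name', 'password'],
    '/tmp/fmjdbc.jar',                           # must match where the driver jar actually lives
)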
I'm working on a number of projects in the Cloud9 IDE, and it's really frustrating that I can't get the better_errors gem to work correctly. It isn't supposed to need initializing; it should just work out of the box. However, I still only get the usual ugly red error page. I should specify that it is included in my Gemfile, and I have already run bundle install.
How can I get better errors to work correctly? Is there an installation step I'm missing?
The trick I used to get the 'better_errors' gem working in Cloud9 is setting the value of TRUSTED_IP to the public IP address of the computer my browser session is attached to. (As far as I can tell, it has nothing to do with the IP address of the remote server or Cloud9 server addresses.)
I'll outline the process I used to get 'better_errors' working in my Cloud9 workspace, from my Chromebook on my residential network... maybe it will also work for you and others!
Add gem "better_errors" to the development group in the project Gemfile.
Add gem "binding_of_caller" to the project Gemfile.
Run bundle in the project's Cloud9 terminal.
Edit the project config/environments/development.rb file and add the following line of code to the end of the Rails.application.configure block.
BetterErrors::Middleware.allow_ip! ENV['TRUSTED_IP'] if ENV['TRUSTED_IP']
Create a new "runner" in Cloud9 by clicking "Run" > "Run With" > "New Runner".
Cloud9 creates a basic runner file in a new tab for you to modify. Replace the contents of this file with the following code.
{
  "cmd": [
    "bash",
    "--login",
    "-c",
    "TRUSTED_IP=XXX.XXX.XXX.XXX rails server -p $port -b $ip $args"
  ],
  "working_dir": "$project_path",
  "info": "Your code is running at \\033[01;34m$url\\033[00m.\n\\033[01;31m",
  "selector": "source.ru"
}
Replace XXX.XXX.XXX.XXX in the code above with the local computer's public IP address. (I use http://ifconfig.me/ to find the public IP assigned to my Chromebook.)
Save the runner file with the name RoR.run into the /.c9/runners path for the project.
Start the project's server by using this new runner: click Run > Run With > RoR.
Use the popup link that Cloud9 displays, after the runner starts the server, to view the app. Enjoy 'better_errors'!
NOTE: I still have not figured out how to automate the process of feeding the external IP address of my local computer into the RoR.run file that lives on the Cloud9 workspace. I just update it manually every time I move to a new network or my external IP address changes.
WARNING: I actually just started learning RoR, so I have no idea if this is the "correct" way to get this gem to work in a cloud dev server/service environment. I also have no idea how safe this would be. I suspect that my solution exposes the 'better_errors' in-browser REPL to all computers that resolve to that same external IP address. If you are working on a sensitive codebase/database please do not implement my solution.
I just tested this in cloud9.io, and this is the simplest way to make it work:
Add the following line to config/environments/development.rb:
BetterErrors::Middleware.allow_ip! 'xxx.xxx.xxx.0/24'
where xxx.xxx.xxx is the first three octets of the IP address of the local machine that you are using to connect to cloud9.io.
There are good answers in the better_errors issues and the c9 docs.
Issues:
https://github.com/charliesome/better_errors/issues/318
c9 help:
https://community.c9.io/t/white-listing-remote-addr-for-better-errors-gem/4976/4
Use a Rack::Request object to get the IP. You can put the following code in your view.
if Rails.env.development?
  request = Rack::Request.new(env)
  puts "###### Request IP_ADDRESS = #{request.ip}"
end
Change the last octet of the IP you get to 0/24. For example:
BetterErrors::Middleware.allow_ip! '76.168.69.0/24'
(Note: change the last octet to 0/24; your IP address will, of course, be different from 76.168.69.xx.)
Yeah!! I got it! Automatically!
Here is my solution:
1- Similar to what @Grokcodile described: edit the project's config/environments/development.rb file and add the following lines of code to the Rails.application.configure block.
BetterErrors::Middleware.allow_ip! ENV['TRUSTED_IP'] if ENV['TRUSTED_IP']
config.web_console.whitelisted_ips = ENV['TRUSTED_IP']
2- On Cloud9, edit ~/.bashrc:
vi ~/.bashrc
and add the line:
export TRUSTED_IP='0.0.0.0/0.0.0.0'
Save it (esc, :wq).
3- run rails s -b $IP -p $PORT as usual...
4- Enjoy better errors!!
If you also work on this project in a virtual machine (Vagrant):
1- Edit your ~/.bash_profile on the VM (in my case) and add:
export TRUSTED_IP=x.x.x.x
export PORT=3000
export IP=0.0.0.0
x.x.x.x must be equal to the REMOTE_ADDR of ENV. (This is not a problem like on Cloud9, because on my VM the IP doesn't change every time: it's always 10.0.2.2 for me.)
With this I am now able to use the foreman gem: foreman start now works in both places with the Procfile:
web: rails s -b $IP -p $PORT
This works because the global env variables are set in both environments.
I am just starting to learn RoR too, so I hope this is the right thing to do and won't bring more problems in the future.
Because Cloud9 is all web-based, you don't access it from localhost, so by default better_errors won't work. If you take a look at the security section of the README (https://github.com/charliesome/better_errors) you can add the following to config/environments/development.rb:
BetterErrors::Middleware.allow_ip! <ipaddress>
so that the error page shows for your IP. You can find your apparent IP by hitting the old error page's "Show env dump" and looking at "REMOTE_ADDR".
Because I need a Python-enabled gdb, I installed another version via
brew tap homebrew/dupes
brew install gdb
I want to use this gdb with Eclipse CDT, where I entered the path to the binary in the Debugging settings. However, launching a program for debugging fails with the following message:
Error in final launch sequence
Failed to execute MI command:
-exec-run
Error message from debugger back end:
Unable to find Mach task port for process-id 39847: (os/kern) failure (0x5).\n (please check gdb is codesigned - see taskgated(8))
What does "codesigned" mean in this context? How can I get this gdbrunning?
I.1 Codesigning the Debugger
The Darwin Kernel requires the debugger to have special permissions
before it is allowed to control other processes. These permissions are
granted by codesigning the GDB executable. Without these permissions,
the debugger will report error messages such as:
Starting program: /x/y/foo
Unable to find Mach task port for process-id 28885: (os/kern) failure (0x5).
(please check gdb is codesigned - see taskgated(8))
Codesigning requires a certificate. The following procedure explains how to create one:
Start the Keychain Access application (in /Applications/Utilities/Keychain Access.app)
Select the Keychain Access -> Certificate Assistant -> Create a Certificate... menu
Then:
Choose a name for the new certificate (this procedure will use
"gdb-cert" as an example)
Set "Identity Type" to "Self Signed Root"
Set "Certificate Type" to "Code Signing"
Activate the "Let me override defaults" option
Click several times on "Continue" until the "Specify
a Location For The Certificate" screen appears, then set "Keychain" to "System"
Click on "Continue" until the certificate is created
Finally, in the view, double-click on the new certificate, and set "When using
this certificate" to "Always Trust"
Exit the Keychain Access application and restart the computer (this is unfortunately required)
Once a certificate has been created, the debugger can be codesigned as
follows. In a Terminal, run the following command...
codesign -f -s "gdb-cert" <gnat_install_prefix>/bin/gdb
... where "gdb-cert" should be replaced by the actual certificate name
chosen above, and should be replaced by the
location where you installed GNAT.
source: https://gcc.gnu.org/onlinedocs/gcc-4.8.1/gnat_ugn_unw/Codesigning-the-Debugger.html
UPDATE: High-Sierra (Certificate Assistant - Unknown Error)
https://apple.stackexchange.com/questions/309017/unknown-error-2-147-414-007-on-creating-certificate-with-certificate-assist
Check the trust settings of the cert; it must be trusted for code signing (on Yosemite, that is the third from last item in the trust section of the cert view in Keychain Access).
At first the cert was not recognized by the keychain for codesigning, because the extension purpose "Code Signing" was missing; you can check this by double-clicking on the certificate in the keychain.
I fixed that.
Then I added the certificate to the trusted signing certificates, after dragging and dropping it from the keychain to my desktop, which created ~/Desktop/gdb-cert.cer:
$ sudo security add-trusted-cert -d -r trustRoot -p codeSign -k /Library/Keychains/System.keychain ~/Desktop/gdb-cert.cer
This was a bit tricky, because I was misled by some internet posts and did not look at the man page. Some said you should use add-trust (https://llvm.org/svn/llvm-project/lldb/trunk/docs/code-signing.txt). The terrible bit was that the command succeeded, but did not do what it "should" (well, it was the wrong command, but it should have told me it was wrong).
After that I found the new cert in the trusted certs like so:
$ security find-identity -p codesigning
Policy: Code Signing
Matching identities
1) E7419032D4..... "Mac Developer: FirstName LastName (K2Q869SWUE)" (CSSMERR_TP_CERT_EXPIRED)
2) ACD43B6... "gdb-cert"
2 identities found
Valid identities only
1) ACD43... "gdb-cert"
1 valid identities found
In my case the Apple cert had expired, but the one I was using to sign gdb had not (well, I had just created it myself). Also be aware that the policy is named differently for the "security add-trusted-cert" command (-p codeSign) and the "security find-identity" command (-p codesigning). I then went on to sign gdb, and I kept getting:
$ codesign --sign gdb-cert.cer --keychain ~/Library/Keychains/login.keychain `which gdb`
gdb-cert.cer: no identity found
because I was under the impression that I had to give the cert's file name to the --sign option, when in fact it is the CN of the certificate that I should have provided, and it has to be in the trust store. You can find the CN by double-clicking on the cert in the keychain, or in the above output of "security find-identity -p codesigning". Then I went on to sign, and I had to give it the right keychain:
codesign -s gdb-cert --keychain /Library/Keychains/System.keychain `which gdb`
I had to enter the root password to allow access to the keychain.
That then gave me a working gdb and it should give you a signed application.
It would seem you need to sign the executable. See these links for more information. You should be able to get away with self-signing if you don't plan on redistributing that version of gdb.
https://developer.apple.com/library/mac/#documentation/Security/Conceptual/CodeSigningGuide/Introduction/Introduction.html
https://developer.apple.com/library/mac/#documentation/Darwin/Reference/Manpages/man1/codesign.1.html
Alternatively, you could disable code-signing enforcement on your system, although this presents a security risk. To do so, try running sudo spctl --master-disable in the Terminal.
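For example (this loosens Gatekeeper system-wide, so turn it back on when you are done):
sudo spctl --master-disable
# and afterwards:
sudo spctl --master-enable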
I made gdb work on OS X 10.9 without codesigning this way (described here):
Install gdb with MacPorts (you may be able to skip this).
sudo nano /System/Library/LaunchDaemons/com.apple.taskgated.plist
Change the option string from -s to -sp at line 22, col 27.
Reboot the computer.
Use gdb.
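For orientation, the relevant part of that plist after the edit should look roughly like this (a sketch from memory; verify against your own file):
<key>ProgramArguments</key>
<array>
    <string>/usr/libexec/taskgated</string>
    <string>-sp</string>
</array>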
If using gdb isn't a hard requirement, you can also use lldb as an alternative. It is already on your system and doesn't need to be code-signed:
$ lldb stddev_bugged
(lldb) target create "stddev_bugged"
Current executable set to 'stddev_bugged' (x86_64).
(lldb) b mean_and_var
Breakpoint 1: where = stddev_bugged`mean_and_var + 17 at stddev_bugged.c:17, address = 0x0000000100000b11
(lldb) r
Process 1621 launched: '/Users/richardschneeman/Documents/projects/21stCentury/02/example-00/stddev_bugged' (x86_64)
Process 1621 stopped
* thread #1: tid = 0xc777, 0x0000000100000b11 stddev_bugged`mean_and_var(data=0x00007fff5fbff590) + 17 at stddev_bugged.c:17, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
frame #0: 0x0000000100000b11 stddev_bugged`mean_and_var(data=0x00007fff5fbff590) + 17 at stddev_bugged.c:17
14 typedef struct meanvar {double mean, var;} meanvar;
15
16 meanvar mean_and_var(const double *data){
-> 17 long double avg = 0,
18 avg2 = 0;
19 long double ratio;
20 size_t count= 0;
(lldb)
Here's a table converting gdb commands to lldb commands: http://lldb.llvm.org/lldb-gdb.html
I ended up having to follow these directions instead of the directions suggested by others.
I'm still not sure if it was the act of killall taskgated or the process of enabling the root user that made the difference.
Some have said rebooting is necessary. I find that with the above instructions, that may not be the case.
I did also make the change recommended by @klm123, so this may also have contributed.
Note that I use homebrew, not macports.
This is an older question, but none of the solutions seemed to work for me (I was using Mojave). Converting to lldb isn't a solution to the question; it's just a workaround.
After trying several solutions, the one I found to work was located here: https://gist.github.com/gravitylow/fb595186ce6068537a6e9da6d8b5b96d#gistcomment-2891198
Which references this site: https://sourceware.org/gdb/wiki/PermissionsDarwin#Sign_and_entitle_the_gdb_binary
The solution involves a slightly modified version of the code signing: essentially, the main difference is that an entitlements XML file must be passed when codesigning. Below I have copy/pasted the contents of the sourceware website for all of the steps from beginning to end.
1.1. Create a certificate in the System Keychain
Start the Keychain Access application (/Applications/Utilities/Keychain Access.app).
Open the menu item Keychain Access/Certificate Assistant/Create a Certificate...
Choose a name (gdb-cert in the example), set Identity Type to Self Signed Root, set Certificate Type to Code Signing, and select Let me override defaults. Click several times on Continue until you get to the Specify a Location For The Certificate screen, then set Keychain to System.
💡 If you cannot store the certificate in the System keychain: create
it in the login keychain instead, then export it. You can then import
it into the System keychain.
Finally, quit the Keychain Access application to refresh the
certificate store.
Control: in the terminal type
security find-certificate -c gdb-cert
This should display some details about your newly minted certificate,
e.g.
keychain: "/Library/Keychains/System.keychain" version: 256 class:
0x80001000 attributes:
"alis"="gdb-cert" [...]
Make sure that keychain: is the System keychain, as shown.
Also, make sure that your certificate is not expired yet:
security find-certificate -p -c gdb-cert | openssl x509 -checkend 0
💡If you want to inspect the entire X509 data structure, you can type
security find-certificate -p -c gdb-cert |openssl x509 -noout -text
1.2. Trust the certificate for code signing
Start Keychain Access again. Using the contextual menu for the
certificate, select Get Info, open the Trust item, and set Code
Signing to Always Trust.
Finally, quit the Keychain Access application once more to refresh the
certificate store.
Control: in the terminal type
security dump-trust-settings -d
This should show the gdb-cert certificate (perhaps among others) and
its trust settings, including Code Signing.
1.3. Sign and entitle the gdb binary
(Mac OS X 10.14 and later) Create a gdb-entitlement.xml file containing the following:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.security.cs.debugger</key>
    <true/>
</dict>
</plist>
If the certificate you generated in the previous section is known as gdb-cert, use:
codesign --entitlements gdb-entitlement.xml -fs gdb-cert $(which gdb)
or before Mojave (10.14), just
codesign -fs gdb-cert $(which gdb)
💡 You may have to prepend this command with sudo if the gdb binary is
located in a place that is not writable by regular users.
If you plan to build gdb frequently, this step can be automated by
passing --enable-codesign=gdb-cert (assuming, again, that gdb-cert is
the name of the certificate) to configure.
Control: in the terminal type
codesign -vv $(which gdb)
And for 10.14 (Mojave) onwards, also check the entitlements:
codesign -d --entitlements - $(which gdb)
1.4. Refresh the system's certificates and code-signing data
The most reliable way is to reboot your system.
A less invasive way is to restart the taskgated service by killing the
currently running taskgated process (at any time, but no later than
before trying to run gdb again):
sudo killall taskgated
However, sometimes the taskgated service will not restart successfully
after killing it, so ensure that it is alive after this step by
checking e.g. ps $(pgrep -f taskgated). Or just reboot your system, as
mentioned above.
It's a very old topic, but I am adding a response because, out of the many available sets of instructions, only one contained just the right steps to make a self-signed debugger work.
You have to create a self-signed root certificate and then sign the gdb executable with it, but many people complained that it did not work for them. Neither did it for me until I stumbled upon this link.
The key point missing in other manuals is that you have to restart your computer for the changes to take effect. Once I did that, everything worked as intended.
I hope this will help others.
I followed the codesigning instructions, but gdb would still give me the same error. It turned out that it did work when gdb was run as root (sudo gdb). I'm using macOS Sierra.
I know this is not a direct answer to the question, but I wish someone had mentioned it before I went to the effort of getting gdb to work.
You can build and debug C++ code with Apple's free IDE, Xcode. (Xcode is similar to Visual Studio or Android Studio.) I was already an Xcode user, but I had no idea that it worked with C++, because the option is fairly well hidden. This YouTube video walks you through it:
https://www.youtube.com/watch?v=-H_EyIqBNDA