I would like to add some core dump configuration to an OpenSIPS Docker container. At present, I am only generating a core dump file with the name "core" (which could be overwritten).
The changes I need to make are:
echo 1 > /proc/sys/kernel/core_uses_pid --> which generates "core.6645" for example
and
echo 'core.%e.%t.sig%s.%p' > /proc/sys/kernel/core_pattern --> This will have the core file name contain the process name (%e), the timestamp (%t), the received signal (%s) and the PID (%p). A crash of the opensips process with PID 6645 on signal 11 would then produce a file like core.opensips.1620000000.sig11.6645.
The problem is, the /proc/sys/kernel/ filesystem is read-only inside the container, so both attempts at editing the files get this error when building the image (I added the above commands to the Dockerfile):
/proc/sys/kernel/core_uses_pid': Read-only file system
Any ideas on how I can get around this issue?
To solve this, you need to create your container with --privileged, e.g.:
docker run --privileged -i --name master --hostname k8s-master -d ubuntu:20.04
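Note that writes to /proc/sys only succeed at run time in a privileged container, never during docker build, so the two echo commands have to move out of the Dockerfile and into the container's startup. A minimal sketch of an entrypoint that does this (the file name entrypoint.sh is an assumption; the core pattern is the one from the question):

#!/bin/sh
# entrypoint.sh - apply the core dump settings at container start,
# then hand control to whatever command the container was given.
echo 1 > /proc/sys/kernel/core_uses_pid
echo 'core.%e.%t.sig%s.%p' > /proc/sys/kernel/core_pattern
exec "$@"

Keep in mind that core_pattern is not namespaced per container: setting it in a privileged container changes it for the whole host.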
On the Docker image debian:stretch-slim, I couldn't delete a specific folder on an NFS drive using rm -rf /folder-name as root (or rm -rf * after entering the folder).
Got the following error back:
rm: cannot remove 'test-ikmgfjhv/dev/.nfse47cf31c6b1dd52500000009': Device or resource busy
After a lot of searching, I eventually got to the following link:
https://uisapp2.iu.edu/confluence-prd/pages/viewpage.action?pageId=123962105
which describes exactly why those files exist in NFS and how to handle them.
Since I wasn't on the same machine the process runs on (it was in another container), I had to work around that: first make sure the process using the file is killed on the first machine, then delete the file on the second one, according to the project's needs.
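A sketch of that workaround, using the file name from the error above (fuser -k is an alternative to the lsof/kill approach described below; by default it sends SIGKILL to every process using the file):

# On the machine/container where the process actually runs:
fuser -k test-ikmgfjhv/dev/.nfse47cf31c6b1dd52500000009
# Then, on the machine mounting the NFS share:
rm -rf /folder-name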
It is possible that the .nfs file is attached to a process that is busy or running (for example, a file that is still open in vim).
For example, if the hidden file is .nfs000000000189806400000085, run this command to get the PID:
lsof .nfs000000000189806400000085
This will output the PID and other info related to that file.
Then kill the process:
kill -9 <PID>
Be aware that if the file was not saved you will lose the information.
While running any command, if you get an error like:
/home/mmandi/testcases/.nfs000000e75853: Device or resource busy
Go to the directory where this file is being shown.
For example, in this case: /home/mmandi/testcases/
Do the following:
# ls -la : this displays the contents of the directory, including files starting with "."
Here it displays the .nfs000000e7585 file.
# lsof .nfs000000e7585 : this lists the PID of the process holding the file.
# kill -9 <PID>
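The same procedure as a one-liner sketch, using lsof's -t flag (terse output, PIDs only) so the result can be fed straight to kill; the .nfs file name is the example one from above:

pid=$(lsof -t .nfs000000e7585)
[ -n "$pid" ] && kill -9 "$pid"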
I'm developing a C++ library that contains a piece of shell script code that returns the name of a specific serial port. When I run this script in a console, on either an x64 desktop or an ARM environment, the script returns the right answer. The problem occurs when I execute the same script from inside the library: the result is a malformed string like ÈÛT¶ÈÛT¶¨a, but the expected value is /dev/ttyACM0.
The script that runs inside the library:
bash -c '
for sysdevpath in $(find /sys/bus/usb/devices/usb*/ -name dev); do
    (
        syspath="${sysdevpath%/dev}"
        devname="$(udevadm info -q name -p $syspath)"
        [[ "$devname" == "bus/"* ]] && continue
        teste="$(udevadm info -q property --export -p $syspath | grep -i "company_name")"
        if [[ ! -z "${teste// }" && $devname == *"ttyACM"* ]]; then
            echo "/dev/$devname"
        fi
    )
done' 2> /dev/null
The following piece of code is used to save the content returned by the script into a file.
C++ code:
// result is assumed to hold the NUL-terminated string returned by the script.
FILE *pfFile = fopen(CONFIG_FILE, "w+");
if (pfFile == NULL)
    return -1; // could not open the config file
fwrite(result, strlen(result), 1, pfFile);
fclose(pfFile);
return 0;
Besides the fact that you didn't include what result is or where it comes from in your C++ code, you've chosen the hardest way to do this. Running shell scripts from inside a library will most likely cause nothing but headaches (garbage output like ÈÛT¶ÈÛT¶¨a typically means the buffer holding the script's output was never NUL-terminated, or was already freed).
Basically, you can create a udev rule for your device to get a unique, stable file in /dev to access it. You can create one like this example from the ArchWiki:
KERNEL=="video[0-9]*", SUBSYSTEM=="video4linux", SUBSYSTEMS=="usb", ATTRS{idVendor}=="05a9", ATTRS{idProduct}=="4519", SYMLINK+="video-cam1"
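For a USB serial device like the one in the question, a rule along the following lines would give the library a fixed path to open directly, with no shell script involved. The vendor/product IDs and the symlink name are hypothetical placeholders; the real values can be read with udevadm info -a -n /dev/ttyACM0:

# /etc/udev/rules.d/99-my-serial.rules (hypothetical IDs)
SUBSYSTEM=="tty", ATTRS{idVendor}=="1234", ATTRS{idProduct}=="abcd", SYMLINK+="my-serial"

After reloading the rules (udevadm control --reload-rules && udevadm trigger), the library can open /dev/my-serial without running any script.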
I'm using AWS CodeDeploy to deploy my code from GitHub to an AWS EC2 instance (Windows Server 2008). The deployment fails in the DownloadBundle event.
Error stack in the AWS logs:
No such file or directory - C:\ProgramData/Amazon/CodeDeploy/4fbb84fd-caa5-4d1a-9894-16b25abcea76/d-QUPXMDBCF/deployment-archive-temp/My-Application-163e9d3343be82038fe2e5c58a9fcae86683d4ea/src/main/java/com/myapp/dewa/customexceptions/EventNotPublishedException.java
The problem here might be the file path length limit of Windows.
UPDATE: The AWS CodeDeploy support team has confirmed that this is a limitation on their side. More than half of the file path is taken up by CodeDeploy itself, which is why the limit is being exceeded.
Have you replaced some strings in the file path and/or file name?
You get this error when the total length of the file path is beyond 260 characters; this length includes one null character at the end for termination. The total length of the path as shown is 239 + 1 = 240, which is below the limit, hence my question.
For reference, please see this article: https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx#maxpath
If you check the path in the destination, you should not see the file, because it was not copied, although it is present in your revision zip file.
In my case, the total length was 266. It may not be possible to shorten the strings of the actual file path in the revision, since many of them are created by developer tools. Amazon is investigating at their end now to see how to overcome this.
You can test and confirm by doing the following:
Run the following command in the command prompt to create the deployment archive folder:
mkdir "C:\ProgramData/Amazon/CodeDeploy/4fbb84fd-caa5-4d1a-9894-16b25abcea76/d-QUPXMDBCF/deployment-archive-temp"
Simply try to extract your revision zip file directly under the 'deployment-archive-temp' folder.
You should receive the following error for any file crossing the maximum path length of 260:
'Error 0x80010135: Path too long'
Ref: https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx#maxpath
I hope this helps.
While not a complete solution, I've experienced the same problem, and we were able to remove the preceding 'ProgramData\Amazon\CodeDeploy' to save 29 characters, if you can stand the mess in your root folder.
To do this, we modified the conf.yml file located in c:\programdata\amazon\codedeploy\. I changed
root_dir: 'Amazon\CodeDeploy'
to
root_dir: 'C:\'
If you are using Windows Server 2016, setting the LongPathsEnabled value to 1 under the following registry key will fix the issue with long paths:
HKLM:SYSTEM\CurrentControlSet\Control\FileSystem
Referencing iskandar's post, this can be done through a PowerShell script if you wish to automate it, in something like a startup script.
# See https://github.com/aws/aws-codedeploy-agent/issues/46
# See https://learn.microsoft.com/en-us/windows/desktop/FileIO/naming-a-file#paths
Write-Verbose "----> Enabling Long Path Support"
$RegistryPath = "HKLM:SYSTEM\CurrentControlSet\Control\FileSystem"
$Name = "LongPathsEnabled"
New-ItemProperty -Path $RegistryPath -Name $Name -Value 1 -PropertyType DWORD -Force | Out-Null
# You'll want to reboot to make sure; this is Windows we're working with.
Restart-Computer
You can also use the GUI method outlined in this post.
Note - either method will definitely require a restart for the setting to take effect.
I've created a simple service using Automator that takes a *.tiff, creates a *.jpg out of it and then deletes the original.
However, when I run this on a *.tiff file, it keeps on running, meaning it keeps converting the (by then jpg) file over and over again. At least I believe it does, since the file disappears and reappears about 2 times a minute and the timestamp changes. How do I tell it to run the service (i.e. the shell commands) just once?
The service in Automator is just this one action of type "Run Shell Script". The shell script is:
newName=${@%.tiff}.jpg
echo "$newName"
sips -s format jpeg "$@" --out "${newName}"
rm "$@"
Thanks!
(Would have posted a picture of the Automator window, but was not allowed to)
This behavior happens when the script runs as a folder action.
If you created a folder action:
1- You must filter for TIFF files, so that the folder action doesn't process the created JPEG file.
2- You must use a loop, in case you drop more than one file in the folder.
3- Use "&&" to delete the TIFF file only when the sips command finishes successfully.
Here's the script:
for f in "$@"; do
    if [[ "$f" == *.tiff ]]; then
        sips -s format jpeg "$f" --out "${f%.tiff}.jpg" && rm "$f"
    fi
done
I assume it is cron that keeps creating random empty files in my root directory with names like users-recall?type=cron.*** - * (some of the random numbers contain dots, like 111.01). The crontab has only 2 jobs in it.
*/1 * * * * cd /var/www/html/*** && php script.php > /dev/null 2>&1
0 */1 * * * wget -q -t 1 https://domain/users-recall?type=cron > /dev/null 2>&1
I tried to search for it but couldn't find anything. The files are created daily or so, not every minute. I am not sure what else could create these files. I just have an AWS EC2 Linux server with nothing additional installed except standard tools.
Sending the STDOUT/STDERR output of wget to /dev/null is not sufficient if you don't want a file to be created.
Modify your command by adding -O -.
... type=cron -O - > /dev/null 2>&1
The -O option (uppercase letter O) is the short form for --output-document, and the next - means to use STDOUT instead of creating a file on disk from the response. By default, wget creates a file, and appends a counter to the end if the file already exists. Since you are discarding STDOUT with > /dev/null, this will do what you intend -- make a web request, and discard the output.
These files are being created by your second cron job for sure. You can comment that job out and see that those files will stop being created.
Can you try enclosing the URL in single quotes?
0 */1 * * * wget -q -t 1 'https://domain/users-recall?type=cron' > /dev/null 2>&1
The ? character may be interpreted by the shell, depending on the SHELL variable of the cron file.
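Combining both suggestions (quoting the URL and adding -O - so wget writes the response to STDOUT instead of a file), the crontab line would look like this:

0 */1 * * * wget -q -t 1 -O - 'https://domain/users-recall?type=cron' > /dev/null 2>&1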