Xcode 14.2: another Command PhaseScriptExecution failed error - build

So I'm stuck with the Command PhaseScriptExecution failed error and have been reading a lot of suggestions on possible fixes. I have reinstalled the Pods and tried a new certificate. My code runs fine for a while, then frustratingly fails with the above error. I quit Xcode and re-open it; sometimes it works for 10-15 minutes, other times it fails immediately. It looks like I may be missing a file or folder, per my build log:
PhaseScriptExecution [CP]\ Copy\ XCFrameworks /Users/myname/Library/Developer/Xcode/DerivedData/F1CD-emwkofgjspicwrayhniokxrlpenk/Build/Intermediates.noindex/Pods.build/Debug-iphonesimulator/GoogleAppMeasurement.build/Script-3620B48AE6C009A94EF9C2906A7D6E1C.sh (in target 'GoogleAppMeasurement' from project 'Pods')
cd /Users/myname/Desktop/F1CD/Pods
/bin/sh -c /Users/myname/Library/Developer/Xcode/DerivedData/F1CD-emwkofgjspicwrayhniokxrlpenk/Build/Intermediates.noindex/Pods.build/Debug-iphonesimulator/GoogleAppMeasurement.build/Script-3620B48AE6C009A94EF9C2906A7D6E1C.sh
Selected xcframework slice ios-arm64_i386_x86_64-simulator
rsync --delete -av --filter P .*.?????? --links --filter "- CVS/" --filter "- .svn/" --filter "- .git/" --filter "- .hg/" "/Users/myname/Desktop/F1CD/Pods/GoogleAppMeasurement/Frameworks/GoogleAppMeasurementIdentitySupport.xcframework/ios-arm64_i386_x86_64-simulator/*" "/Users/myname/Library/Developer/Xcode/DerivedData/F1CD-emwkofgjspicwrayhniokxrlpenk/Build/Products/Debug-iphonesimulator/XCFrameworkIntermediates/GoogleAppMeasurement/AdIdSupport"
building file list ... done
sent 238 bytes received 20 bytes 516.00 bytes/sec
total size is 48003 speedup is 186.06
Copied /Users/myname/Desktop/F1CD/Pods/GoogleAppMeasurement/Frameworks/GoogleAppMeasurementIdentitySupport.xcframework/ios-arm64_i386_x86_64-simulator to /Users/myname/Library/Developer/Xcode/DerivedData/F1CD-emwkofgjspicwrayhniokxrlpenk/Build/Products/Debug-iphonesimulator/XCFrameworkIntermediates/GoogleAppMeasurement/AdIdSupport
Selected xcframework slice ios-arm64_i386_x86_64-simulator
rsync --delete -av --filter P .*.?????? --links --filter "- CVS/" --filter "- .svn/" --filter "- .git/" --filter "- .hg/" "/Users/myname/Desktop/F1CD/Pods/GoogleAppMeasurement/Frameworks/GoogleAppMeasurement.xcframework/ios-arm64_i386_x86_64-simulator/*" "/Users/myname/Library/Developer/Xcode/DerivedData/F1CD-emwkofgjspicwrayhniokxrlpenk/Build/Products/Debug-iphonesimulator/XCFrameworkIntermediates/GoogleAppMeasurement/WithoutAdIdSupport"
building file list ... rsync: link_stat "/Users/myname/Desktop/F1CD/Pods/GoogleAppMeasurement/Frameworks/GoogleAppMeasurement.xcframework/ios-arm64_i386_x86_64-simulator//*" failed: No such file or directory (2)
done
sent 29 bytes received 20 bytes 98.00 bytes/sec
total size is 0 speedup is 0.00
rsync error: some files could not be transferred (code 23) at /AppleInternal/Library/BuildRoots/aaefcfd1-5c95-11ed-8734-2e32217d8374/Library/Caches/com.apple.xbs/Sources/rsync/rsync/main.c(996) [sender=2.6.9]
Command PhaseScriptExecution failed with a nonzero exit code
Can someone help with what I can try next?
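(For reference, a typical full clean of the Pods and derived data looks roughly like the sketch below. This is only a guess on my part, not something I'm certain addresses the missing xcframework slice shown in the log, and the pod name in the cache-clean step is simply taken from the error.)
# run from the project directory
pod deintegrate
pod cache clean GoogleAppMeasurement
rm -rf ~/Library/Developer/Xcode/DerivedData
pod install --repo-update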

Related

perf report failing with error "data field size is 0 which is unexpected. Was 'perf record' command properly terminated?"

I ran perf record with the following command:
perf record -g -o perf.data.lds "my executables with arguments"
The program ran fine and I can see the "perf.data.lds" file was generated with non-zero size.
-rw------- 1 sjain medrd 364855672 Feb 18 12:45 perf.data.lds
But when I try to open the perf data using the "perf report -i perf.data.lds" command, I get the following error:
WARNING: The perf.data.lds file's data size field is 0 which is unexpected.
Was the 'perf record' command properly terminated?
Any guidance on why this might be happening and how I can resolve it would be helpful.
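One thing worth checking (a hedged guess, not a confirmed diagnosis): perf record only fills in the header's data-size field when it exits cleanly, so if it was killed with SIGKILL, or the session died before it could rewrite the header, the field can stay 0. A minimal sketch of stopping a recording so perf can finalize the file (executable name and arguments are placeholders):
perf record -g -o perf.data.lds -- ./my_executable arg1 arg2 &
PERF_PID=$!
sleep 60                   # profile for as long as needed
kill -INT "$PERF_PID"      # SIGINT/SIGTERM let perf rewrite the header; SIGKILL would not
wait "$PERF_PID"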

gsutil rsync: amount transferred differs from `du`

I'm trying to backup a large directory (several terabytes) to google cloud, using the following command:
gsutil -m rsync -r -e local_dir/ gs://target/bucket
In summary: run in parallel (-m), recursively (-r) sync the directory local_dir/ without following symlinks (-e), and store it remotely in the bucket gs://target/bucket.
This operation completes successfully:
[666.4k/666.4k files][ 6.3 TiB / 6.3 TiB] 100% Done
Operation completed over 666.4k objects/6.3 TiB.
One thing that worries me, however, is that the folder size is different when I run du:
$ du --max-depth 1 -h local_dir/
...
7.6T local_dir
Can anyone explain where the discrepancy of more than 1 TiB between what gsutil transferred and what du reports comes from?
Part of the difference is that Linux du is reporting in units of terabytes (10^12 bytes), while gsutil is reporting in units of tebibytes (2^40 bytes). A tebibyte is 1.0995 times larger than a terabyte, so the same amount of data shows up as a larger number in du's units. Additionally, directories and inodes consume space beyond the file data bytes. For example, if you run these commands:
mkdir tmp
cd tmp
for f in {1..1000};do
touch $f
done
du -h
it reports 24K used, even though each of the files is empty (so, roughly 24 bytes of metadata per file on average). And if you remove the temp files and run du -s on the directory, it still consumes 4k bytes. So, your 666.4k files would consume approximately 16 MB, plus however much is used by the directories that contain them. Also, the amount used may differ depending on the type of filesystem you're using. The numbers reported above are for an ext4 filesystem running on Debian Linux.
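To put rough numbers on both effects (illustrative arithmetic only):
echo "6.3 * 2^40 / 10^12" | bc -l    # 6.3 TiB is about 6.93 TB in decimal units
echo "666400 * 24" | bc              # ~16 MB of per-file metadata at roughly 24 bytes each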

Increasing memory limit for Drush 9 (without changing php.ini file)

I'm trying to run:
drush updb
with Drush 9.3.0 on my Drupal 8 site, and I'm getting this error:
The command could not be executed successfully (returned: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 32 bytes) in /home/myproject/www/staging.myproject.ch/core/lib/Drupal/Core/Routing/CompiledRoute.php on line 163
I tried running drush like this:
php -d memory_limit=1024M vendor/bin/drush ev "echo ini_get('memory_limit')"
and I do get the 1024M value, but when I run updb that way I still get the previous memory error.
Here:
https://github.com/drush-ops/drush/issues/3294
...I saw that Drush 9 runs some tasks in sub-processes, and that this is most likely the case with the updb command, so even if Drush starts with an increased memory limit, the sub-process gets the default one.
How can I increase memory limit for drush 9 without changing php.ini file?
And the answer to this is:
echo "memory_limit = 512M" > drush.ini
PHPRC=./drush.ini php vendor/bin/drush updb
rm drush.ini
I guess this drush.ini could be a "normal" static file, but since all of this is only needed because of mismatched server settings (too little PHP memory), maybe it should not be part of the project...
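If you need this regularly, one option (just a sketch building on the trick above; the script name is made up) is to wrap the three steps in a small script so the temporary drush.ini never ends up in the repository:
#!/bin/sh
# drush-highmem.sh: run any drush command with a temporary memory_limit override
TMP_DIR=$(mktemp -d)
echo "memory_limit = 512M" > "$TMP_DIR/drush.ini"
PHPRC="$TMP_DIR/drush.ini" php vendor/bin/drush "$@"
STATUS=$?
rm -rf "$TMP_DIR"
exit $STATUS
For example: ./drush-highmem.sh updb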

Icecast - Too many open files

I'm running an Icecast 2.4.3 server on CentOS 7. When I have a lot of listeners, I receive these errors and everything stops working:
[2017-06-21 18:56:37] WARN connection/_accept_connection accept() failed with error 24: Too many open files
It's running as the "ices" user:
sudo -u ices /opt/icecast/bin/icecast -c /opt/icecast/etc/icecast.xml -b
Or it runs as root with the "changeowner" option set to the "ices" user.
I set limits.conf:
ices hard nofile 65536
ices soft nofile 65536
From ulimit:
[root@orfeu inweb]# su ices
[ices@orfeu inweb]$ ulimit -n
65536
But when I check the PID, I get:
tcp 0 0 <IP>:8000 0.0.0.0:* LISTEN 21650/icecast
[root@orfeu inweb]# cat /proc/21650/limits
Limit Soft Limit Hard Limit Units
...
Max open files 1024 4096 files
...
How can I fix this so that the 65536 file descriptor limit is actually enforced? Thank you.
I may have found a solution; I need to check when the problem happens again.
I realized that limits.conf sets the limits per user.
I found a way to set the limit per process.
/usr/bin/prlimit -n30000 -p `cat /var/run/icecast.pid`
Now I have:
Max open files 30000 30000 files
I don't know if there's a way to always start the "icecast" binary with these limits, or whether I always need to run the command against the PID after it starts.
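One way to always start icecast with the higher limit (a sketch, assuming icecast is launched as root and drops to "ices" via the changeowner option; paths are taken from the command above) is a small wrapper that raises the limit before exec'ing the binary:
#!/bin/sh
# start-icecast.sh: raise the fd limit while still root, then launch icecast;
# rlimits survive exec and the later privilege drop to the "ices" user
ulimit -n 65536
exec /opt/icecast/bin/icecast -c /opt/icecast/etc/icecast.xml -b
On CentOS 7, a systemd unit with LimitNOFILE=65536 should achieve the same thing.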

C++ REST SDK Compilation Error (CentOS 6) [duplicate]

When I deploy Apache Mesos on Ubuntu 12.04 following the official documentation, the "make -j 8" step gives me this error in the console:
g++: internal compiler error: Killed (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-4.9/README.Bugs> for instructions.
make[2]: *** [slave/containerizer/mesos/libmesos_no_3rdparty_la-containerizer.lo] Error 1
make[2]: *** Waiting for unfinished jobs....
mv -f log/.deps/liblog_la-log.Tpo log/.deps/liblog_la-log.Plo
mv -f slave/containerizer/.deps/libmesos_no_3rdparty_la-docker.Tpo slave/containerizer/.deps/libmesos_no_3rdparty_la-docker.Plo
mv -f log/.deps/liblog_la-consensus.Tpo log/.deps/liblog_la-consensus.Plo
mv -f slave/containerizer/.deps/libmesos_no_3rdparty_la-external_containerizer.Tpo slave/containerizer/.deps/libmesos_no_3rdparty_la-external_containerizer.Plo
mv -f log/.deps/liblog_la-coordinator.Tpo log/.deps/liblog_la-coordinator.Plo
mv -f slave/.deps/libmesos_no_3rdparty_la-slave.Tpo slave/.deps/libmesos_no_3rdparty_la-slave.Plo
mv -f master/.deps/libmesos_no_3rdparty_la-master.Tpo master/.deps/libmesos_no_3rdparty_la-master.Plo
make[2]: Leaving directory `/root/Mesos/mesos/build/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/root/Mesos/mesos/build/src'
make: *** [all-recursive] Error 1
What should I do?
Try running dmesg just after the failure.
Do you see a line like this?
Out of memory: Kill process 23747 (cc1plus) score 15 or sacrifice child
Killed process 23747, UID 2243, (cc1plus) total-vm:214456kB, anon-rss:178936kB, file-rss:5908kB
Most likely that is your problem. Running make -j 8 starts lots of processes, which use more memory. The problem above occurs when your system runs out of memory. In that case, rather than the whole system falling over, the operating system scores each process on the system, and the one that scores highest gets killed to free up memory. If the process that is killed is cc1plus, gcc (perhaps incorrectly) interprets this as the process crashing and assumes it must be a compiler bug. But it isn't really: the problem is that the OS killed cc1plus, not that it crashed.
If this is the case, you are running out of memory. Try make -j 4 instead. Fewer parallel jobs means compilation will take longer, but hopefully it will not exhaust your system's memory.
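A quick way to confirm the diagnosis and retry (a sketch; adjust the job count to your available RAM):
free -m                            # how much RAM and swap is available
dmesg | grep -i "out of memory"    # confirm the OOM killer hit cc1plus
make -j 4                          # fewer parallel jobs; drop to -j 2 or plain make if needed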
(Might be a memory issue)
For anybody still struggling with this (over 2 years after the question was asked), there's this trick on CryptoCurrencyTalk that seems to make it work.
For convenience I'm pasting it here:
Run these (adjust bs= and count= to the amount of swap you want):
sudo dd if=/dev/zero of=/swapfile bs=64M count=16
sudo mkswap /swapfile
sudo swapon /swapfile
That should let you compile your code. But make sure you turn the swap back off after compilation with these:
sudo swapoff /swapfile
sudo rm /swapfile
Check whether your CentOS installation already has swap enabled by typing:
sudo swapon --show
If the output is empty, it means that your system does not have swap space enabled.
Create a swap file
1. Create a file which will be used as swap space. bs is the size of one block and count is the number of blocks; 1024 bytes x 1,048,576 blocks gives 1 GiB of space.
sudo dd if=/dev/zero of=/swapfile bs=1024 count=1048576
2. Ensure that only the root user can read and write the swap file:
sudo chmod 600 /swapfile
3. Set up a Linux swap area on the file:
sudo mkswap /swapfile
4. Activate the swap:
sudo swapon /swapfile
5."sudo swapon --show" or "sudo free -h" you will see the swap space.
This was the clue in my scenario (compiling Mesos on CentOS 7 on an AWS EC2 instance).
I fixed it by increasing the instance's memory and CPU to at least 4 GiB and 2 vCPUs.