Cleanup flatpak repo folder? - flatpak

After trying to build the gitg flatpak I noticed my /var/lib/flatpak/repo folder has become very large.
I'm assuming these are build files? Is there a good way to clean these up?
I'm using Flatpak 1.4.0.

For those landing here who aren't building anything: /var/lib/flatpak/repo is also where every Flatpak install ends up, and upgrades don't clean it up on their own. For packages installed with --user, the equivalent location is ~/.local/share/flatpak/.
I discovered the answer below on this post:
flatpak uninstall --unused
Before
[root@laptop flatpak]# du -sh .
8.4G .
After
[root@laptop flatpak]# du -sh .
4.3G .
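If your applications were installed per-user rather than system-wide (see the note above about ~/.local/share/flatpak/), the same cleanup can be pointed at the user installation; a sketch, assuming the standard --user flag:
flatpak uninstall --user --unused
du -sh ~/.local/share/flatpak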

/var/lib/flatpak/ and ~/.local/share/flatpak/ are the system and user install locations for runtimes (e.g. org.gnome.Platform) and applications (e.g. org.gnome.gitg). The repo/ directory is where all the checksummed binary objects are stored; it works much like a git repository. These are not build files, unless you count the downloaded org.gnome.Sdk//master runtime, which would be installed there. But the SDK is shared and not specific to gitg.
If you built with flatpak-builder, the build files would be in a folder called .flatpak-builder and in the build directory (whatever you named it). So if you ran the following in a directory like ~/gitg-build-folder/:
flatpak-builder --force-clean --repo=gitg-repo build org.gnome.gitgDevel.json
Delete ~/gitg-build-folder/build and ~/gitg-build-folder/.flatpak-builder to remove any build files produced while building gitg.
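A minimal cleanup sketch, assuming the ~/gitg-build-folder/ layout used above:
rm -rf ~/gitg-build-folder/build ~/gitg-build-folder/.flatpak-builder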
If you don't need to build anything in the future you could also delete org.gnome.Sdk//master; however, many of its files are de-duplicated because org.gnome.Platform is also installed. You might also have the *.Debug SDK extension installed, which takes up a lot of space.
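For example, removing the SDK (and, assuming it is installed, its Debug extension) could look like the following; check flatpak list first, since the exact refs on your system may differ:
flatpak list
flatpak uninstall org.gnome.Sdk//master
flatpak uninstall org.gnome.Sdk.Debug//master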
Answer from duplicate question on Flatpak GitHub:
https://github.com/flatpak/flatpak/issues/2945#issuecomment-499498706

Fixed my greedy Flatpak problem, for what it's worth: I managed to clean up about 20 GB of garbage (a huge number of tiny files under /var/lib/flatpak/repo/objects). I started by uninstalling all the applications I had installed there, but it didn't make much difference.
With no applications left and only runtimes installed, disk usage was still the same. I then used the flatpak uninstall --unused command, which removes runtimes and extensions not used by any installed application (I had no applications left, so everything was removed). Even so, there was no big difference on disk.
Finally, sudo flatpak repair, which fixes inconsistencies, is what cleared almost 20 GB.
I had tried it earlier without success; I guess that only after deleting the apps did Flatpak become aware of that garbage.
Although I don't need them anymore, because I installed them directly on the system, I reinstalled the Flatpak applications I had (curious to see what would happen) and everything works fine, taking up only about 1 GB.
My Flatpak version: 1.10.7
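Putting the two answers together, a cleanup pass on a system installation might look like this (the final du is just to check the result):
flatpak uninstall --unused
sudo flatpak repair
du -sh /var/lib/flatpak/repo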

Related

macOS Catalina: trying to install content to the system volume

I have apps that I distribute as .pkg files created using pkgbuild and productbuild. With macOS Catalina, this doesn't work any more. The installer complains that I'm trying to install content to the system volume.
I posted three weeks ago thinking the error had to do with bundling a Java runtime. It turns out it has nothing to do with Java.
To test it I have the smallest possible project, called Hello, with a main window and a button to click. In Xcode, I do Product -> Archive, then Distribute App, and Copy App. This creates a directory Hello 2019-12-18 15-01-07 containing Hello.app. The app works fine. I then run
pkgbuild --root *7 Hello.pkg
which creates Hello.pkg.
When I double-click Hello.pkg in the Finder, the installer presents me with screens for Introduction, Destination Select (only one option is offered), and Installation Type ("Standard Install on Macintosh HD"), then asks for my password. It then says, "This package is incompatible with this version of macOS. The package is trying to install content to the system volume. Contact the software manufacturer for assistance."
It makes no difference if I codesign and notarize. Productbuild only adds one more layer to the failing process.
What am I missing?
We could fix the issue by using the --install-location option of the pkgbuild command.
If --install-location is not given, pkgbuild uses / as the default install location in many cases.
In macOS Catalina, only certain folders are writable. Refer to this link for more details.
In our case, the package installation succeeded only when we specified one of the writable folders such as
/usr/local
/opt
/Applications
as the default install location.
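A sketch of the fixed invocation, reusing the Hello example from the question and assuming /Applications as the target (add --identifier and --version if your setup needs them):
pkgbuild --root "Hello 2019-12-18 15-01-07" --install-location /Applications Hello.pkg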

Fixing MinGW Installation on Windows 8

While helping my friend spin up MinGW and a C++ environment on his Windows 8 computer, I ran the mingw-get setup script and waited as it worked through all the mirrors for the required downloads. However, three downloads failed completely:
libltdl - installer script hung and then moved on after pressing "OK"
automake-1.11 - installer script tried finding 1.10, then 1.9, then 1.8, then 1.7 (all of which failed) until finally settling on 1.6
mktemp - script hung and moved on after pressing "OK"
In all three cases, the script gave me an error log upon completion, showing that most packages had been downloaded and installed except for these three, which showed up as errors. During the installation, however, I had simply gone to the MinGW SourceForge page and manually found and downloaded each missing .bin.tar.lzma file.
Now that I have them, is there a good, accepted way to unpack them and plug them into my friend's existing MinGW install? In case it's tough: I'm comfortable with the Unix and DOS command lines, so I can move executables into the MinGW/bin folder if that's what's needed; I just want to check for the best way to 'fix' the install.
As a side note: even though the error log says these are required packages, adding MinGW/bin/ to the PATH still allows gcc and g++ to be used, although not make (possibly because of the automake failure?). Is this standard behavior?
Firstly, the package issue can be fixed with the MinGW installer: keep the packages selected, go to "Apply Changes", and the script will try to re-download the missing packages. I think the original problem was simply an unreliable Wi-Fi connection while the script was talking to the repository.
However, I then ran into a problem where running gcc gave me a missing -lpthread error, but this question helped me fix that, and gcc and g++ are working fine now (I haven't opened and tested Eclipse yet, though). In case of link decay: the issue arises because the MinGW installer script does not download the pthread library during installation. To fix it, quoting the link:
Just run and open MinGW Installation Manager, which should be pre-installed with MinGW, select "All Packages" on the left panel, and on the right panel, search for "mingw32-pthreads-w32" packages and install them.
I think the Installation Manager has both libpthread and pthread packages available to install; the pthread libs were the ones that solved it for me.
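If you prefer the command line to the GUI manager, mingw-get can do the same thing; a sketch, assuming mingw-get is on your PATH and the package name shown in the Installation Manager:
mingw-get update
mingw-get install mingw32-pthreads-w32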

pyinstaller 3.2 builds a pyqt4/python2.7 onefile exe, but it cannot run: missing msvcr100.dll?

As the title says:
The build succeeds, but the exe won't run; it cannot find msvcr100.dll.
If I put msvcr100.dll in the same directory as the exe, it runs.
But I want just one exe file.
Does anyone know how to do this?
Solved. This is a bug in PyInstaller 3.2; the newer code in git has fixed it. Download the newest source from GitHub and everything works fine.
This is correct, and I can't tell you how much that answer helped me out. I had been trying to build a single-EXE exploit that would run on Windows XP without crashing for my OSCP labs/exam. I followed so many tutorials and nothing seemed to work: I was able to build the EXE but could not get it to run as a single EXE.
If anyone reading this gets "This program cannot be run in DOS mode", try running it from another machine with the same build (Windows XP). There is not much information out there on how to solve that for a reverse shell on an end-of-life operating system using an EXE exploit built with PyInstaller. (Lots of trial and error and determination.)
The Microsoft Visual C++ 2008 Redistributable Package (or some other version, depending on the Python version) is needed in any case; python27.dll requires it.
I was also receiving an error about msvcr100.dll when the exe was run from the GUI on my build machine (Windows XP SP2). This is corrected in the 3.3 dev version on GitHub.
I installed the C++ 2008 package, but that didn't solve my problem when I rebuilt the EXE; the 3.3 dev PyInstaller was the solution.
What I did was:
Download the dev version of PyInstaller 3.3 dev from GitHub (the newest as of 11/14/16, as far as I could tell).
Make sure you have Python 2.7.x (I used 2.7.11) and a pywin32 build that matches your Python 2.7.x version (and it does matter whether it is 64-bit or 32-bit).
Use setup.py to install PyInstaller, and make sure you do not have a previous version already installed; if so, remove it with pip or similar. I had installed with pip first, and that was my whole issue.
I was able to get all of my 32-bit single-EXE exploits to run on 64-bit and 32-bit Windows machines up to Windows 10.
Once that is done, make sure PyInstaller is in your PATH and follow the standard tutorials on creating a --onefile EXE, as sketched below. Copy it to your Windows target machine and it should work without error. I did not need to pull any dependencies over, but you may have to include some with the --hidden-import option; the PyInstaller documentation covers in detail how to include hidden imports and .dlls.
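A rough sketch of that workflow, assuming Python 2.7 with pip and pywin32 already installed; the directory name of the unpacked dev source and the script name exploit.py are placeholders:
pip uninstall pyinstaller
cd pyinstaller-develop
python setup.py install
REM exploit.py stands in for your own script; add --hidden-import <module> if something is missing at runtime
pyinstaller --onefile exploit.py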
If this still doesn't work for you, try using py2exe. It's a little more complicated, but if you're determined you will figure it out.
If you have code written for both Python 2.x and 3.x, you can keep multiple Python environments and install PyInstaller in each. This is covered in the documentation as well.
Thank you, jim ying. Your two-sentence answer was exactly what I needed.

dicom3tools compiles, but the application pbmtoovl is missing

I've installed dicom3tools on Ubuntu with apt-get install dicom3tools, but certain apps are not present.
I've also downloaded the source and compiled it on Ubuntu according to the directions, without errors. I have access to most of the apps in the kit, but some just seem to be missing or not compiling.
I need a working binary copy of the pbmtoovl tool from this kit.
Can anyone help me?
Do you know why it is missing?
Do I need to compile differently?
Do you have a copy of the pbmtoovl app pre-compiled?
There is no info on this anywhere on the web; I have nowhere else to turn.
Thanks in advance for any info on this.
Please help me with this.
I edited the proper file with a UID.
I ran
imake -I./config -DInstallInTopDir -DUsemyID
and everything looked fine. Then:
make World
make install
make install.man
but there is still no rawtodc or pbmtoovl or any of the DICOM creation tools. I really need these tools. Please let me know what I'm doing wrong. This is on Ubuntu 14.
I am the author of the dicom3tools Debian package. The explanation is given online here.
When you install a Debian package, you are required to read the documentation. In this case the documentation was available on your system from:
$ cat /usr/share/doc/dicom3tools/README.Debian
So you'll need to follow the build instructions yourself (see INSTALL):
Edit config/site.p-def to set your UID root (a la UseClunieID, to be
selected with a UseXXXXID define on the imake command line).
NB. Don't ever use any UseClunie*ID or your instances
will conflict with mine!
./Configure
setenv IMAKEINCLUDE -I./config # only needed for suns
imake -I./config -DInstallInTopDir -DUseXXXXID
make World
make install # into ./bin
make install.man # into ./man
I finally did a fresh Ubuntu install, installed xutils, g++, and gcc, and ran the compile instructions. It did not install, again, but this time there was a new directory under bin ending in 'unknown' that contained all of the compiled binaries. I added that directory to the PATH and, voilà, I can access all the tools from the command line.
It's still a problem, but I can now use pbmtoovl
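For reference, making that architecture-specific bin/ subdirectory available permanently could look like this; the paths are assumptions based on the description above (a source tree at ~/dicom3tools with a single subdirectory under bin/):
cd ~/dicom3tools                        # where the source was unpacked and built (assumption)
export PATH="$PWD/bin/$(ls bin):$PATH"  # assumes bin/ contains one arch-specific subdirectory
command -v pbmtoovl                     # should now print the tool's full path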

G++/GCC: how to make your app tell the OS to download the libs it needs?

So, for example, I am creating an app that uses Boost or OpenCV, and on my developer machine all of that is installed, so the app compiles without any problem. But I wonder how to make the app tell the OS to download the libs it uses on first run. Is that possible? (Sorry, I am a Linux noob.)
This is what package managers are for. What you do is you compile your project, and then you build a package (e.g. .deb or .rpm), using the appropriate tools. While doing so, you can specify where the various files in your package should go, but also which other packages your package relies on. These are known as "dependencies", and package managers like apt and rpm are pretty good at resolving them.
Here's the official debian guide to making packages to give you an idea:
http://www.debian.org/doc/maint-guide/
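For a .deb, the runtime dependencies end up in the package's control file. A minimal sketch of a debian/control for a debhelper-based package, with the package name myapp and the maintainer address as placeholders; ${shlibs:Depends} lets the packaging tools fill in the Boost/OpenCV shared-library dependencies automatically:
Source: myapp
Section: utils
Priority: optional
Maintainer: Your Name <you@example.com>
Build-Depends: debhelper (>= 9)

Package: myapp
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: example application that uses Boost and OpenCV
 Longer description of the application goes here.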
Alternatively, you can just distribute your program as-is and list the dependencies in the install instructions; users will then have to manually install them through their package manager before running your program.