Composer - is it OK to copy vendor files from dev to production?

I'm using Composer for the AWS SDK.
Is it OK to install the SDK on a dev PC (Windows) and copy the vendor directory to production, or must I install Composer on the production server (Linux) and let it install the libraries there?

Moving the whole project around to another path, or even to a different computer, should work fine, given that all paths Composer generates for autoloading etc. are relative.
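If you prefer to regenerate the vendor directory on the server instead of copying it, here is a minimal sketch, assuming composer.json and composer.lock are shipped with the project:

# on the production server, from the project root
composer install --no-dev --optimize-autoloader

This reproduces the exact locked dependency versions and avoids carrying over anything Windows-specific.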

Related

Should I move my Windows project folder into WSL?

I'm trying to set up a work environment on a new machine and I am a bit confused about how best to proceed.
I've set up a new Windows machine and have WSL2 set up; I plan on using that with VS Code for my development environment.
I have a previous Django project that I want to continue working on, stored in a folder on a thumb drive.
Do I move the [Windows] project folder into the Linux file system and everything is magically ready to go?
Will my previous virtual environment in the existing folder still work, or do I need to start a new one?
Is it better to just start a new folder via the Linux terminal and pull the project from GitHub?
I haven't installed pip, Python, or Django on the Windows OR Linux side just yet either.
Any other things to look out for while setting this up would be really appreciated. I'm trying to avoid headaches later by getting it all set up correctly now!
I would pull it from GitHub, and make sure you have the correct settings for line endings, since they are different between Windows and Linux. Just let Git manage these though:
https://docs.github.com/en/get-started/getting-started-with-git/configuring-git-to-handle-line-endings
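Inside WSL, that guide boils down to something like this (a minimal sketch; adjust to your workflow):

# in the WSL shell: keep LF in the repo, don't convert line endings on checkout
git config --global core.autocrlf input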
Some other suggestions:
Use a version manager in Linux to manage your Python versions - something like pyenv or asdf. It will make life easier.
Make sure to always create a virtual environment for everything and don't pip install anything into your main Python. (I use direnv for virtual env management.)
The single exception to the previous suggestion is pipx, which I do install in the main Python and then use to install CLI tools: black, isort, pip-tools, etc. (A rough sketch of this whole setup follows after these suggestions.)
Configure VS Code to use the pipx-installed versions of black, flake8, etc. for linting purposes.
If you're using Docker, enable the WSL integration for your WSL flavour (probably Ubuntu). Note that Docker Desktop needs starting before your WSL session.
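A rough sketch of that Python setup under WSL (versions and tool choices are illustrative, not prescriptive):

# install pyenv and pick a Python version, keeping the system Python untouched
curl -fsSL https://pyenv.run | bash   # then add the init lines it prints to your shell profile
pyenv install 3.11
pyenv global 3.11

# one virtual environment per project
python -m venv .venv
source .venv/bin/activate

# pipx is the single global install; CLI tools go through it
python -m pip install --user pipx
python -m pipx ensurepath
pipx install black
pipx install flake8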

Debugging a Qt app in Qt Creator with a Docker development environment

Background
I was using a laptop with openSUSE Leap 15.1 to develop a Qt app. I upgraded to openSUSE Tumbleweed, and now I realize that the library versions my app depends on are not available for Tumbleweed. So I have these options:
Reinstall openSUSE Leap 15.1 (or maybe 15.2?)
Set up a development environment with some Docker images
Set up a development environment with a virtual machine
Grab the binary packages of the unavailable dependencies directly and install them manually on openSUSE Tumbleweed
...?
Question
About the 2nd option, i.e. Docker.
It's known how to use Docker to deploy the app: you set up the development container with all the dependencies and run some deployment scripts with it.
However, I don't know:
Is it possible to set up Docker containers in such a way that the Qt Creator debugger can be used for development? That is, if I use Docker, would I be able to step through the code with the Qt Creator debugger?
Is this scenario possible:
Pull an openSUSE Leap 15.1 Docker image
Set up a bind-mount volume that links the /usr/lib64/ directory from inside the container to the ~/leaplib directory on the host machine, i.e. ~/leaplib:/usr/lib64/
Do the same for the development headers, i.e. ~/leapinclude:/usr/include/
The bind-mount procedure is explained here
Install all the Qt project dependencies in the openSUSE Leap 15.1 container
That way, all dependency libraries and header files would be installed inside the container's bind-mount volumes
Inside the Qt Creator project on the host machine, add ~/leaplib to the library path
Inside the Qt Creator project on the host machine, add ~/leapinclude to the include path
The Qt project source code repository is, of course, on the host machine
Use Qt Creator to open the project repository source code
You should be able to develop and debug the code with the Qt Creator debugger, right?
The above plan is not tested yet, and I'm not sure it would work. Any ideas? Am I missing something?
Another scenario:
Make use of docker-compose and a Dockerfile, as suggested by @DavidMaze
Create a docker-compose.yml file defining custom bind-mount volumes to be able to share data between the container and the host
Create a Dockerfile starting with FROM opensuse/leap:15.1
Install all the dependency packages inside the Dockerfile with zypper --root /usr/local/
The needed container data would then be inside /usr/local/lib64/, /usr/local/lib/ and /usr/local/include/
Share the needed container data with the host by copying it to the custom bind-mount volumes defined inside the docker-compose.yml file
Add the bind-mount volumes to the Qt Creator library path and include path
Use Qt Creator to debug the source code on the host machine
Is anything missing?
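For reference, an untested sketch of roughly that flow, using docker cp instead of bind mounts to pull the installed files out (the package name is just an example, not the real dependency list); note that bind-mounting an empty host directory over /usr/lib64 would hide the container's own libraries, which is why copying out is shown instead:

# run a Leap 15.1 container and install the dependencies in it
docker run --name leap-deps opensuse/leap:15.1 \
  zypper --non-interactive install libqt5-qtbase-devel

# copy the resulting libraries and headers to the host for Qt Creator
docker cp leap-deps:/usr/lib64 ~/leaplib
docker cp leap-deps:/usr/include ~/leapinclude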

Google Cloud SDK installation fails on Windows 10

When I try to install the Google Cloud SDK, it fails:
Welcome to the Google Cloud SDK!
To use the Google Cloud SDK, you must have Python installed and on your PATH.
As an alternative, you may also set the CLOUDSDK_PYTHON environment variable
to the location of your Python executable.
Google Cloud SDK installer will now exit.
Press any key to continue . . .
I installed Python and even added its path to my system and user environment variables, but it still fails. What's going wrong?
I had the same problem some time ago; this is how I solved it:
uninstall the Cloud SDK (also delete the folder), uninstall Python
reboot your system
launch the installer and select "install bundled Python"
when the installer asks for an installation path, point it to "C:\Users\YOUR_USER\AppData\Roaming\gcloud"
I had a problem with my Windows installation, since I had different permissions set on the default suggested path, which was "Program Files (x86)".
Starting fresh + changing the path fixed the issue for me :)
Also review this page, to see if everything is in check for you

Blender on IBM Cloud (Cloud Foundry)

I'm currently developing a web application (Django 2.0).
My app will be deployed on IBM Cloud (Cloud Foundry) using the Python buildpack.
One of my requirements is to install Blender.
Everything else works fine, except for the Blender installation.
What I've tried so far:
I tried accessing my app using an SSH connection, but of course I don't have root access to apt-get install blender!!
I also tried including blender in a packages.json file and pushing that file using cf push my-app.
But nothing worked for me.
As another, shorter question: what is the main approach in Cloud Foundry apps to installing packages, the way we use apt-get install on Ubuntu / Debian?
Please correct me if I did anything wrong, or give me some pointers to solve this problem!
I see a couple of options for you to install packages if they cannot be installed using the regular requirements file (which is the preferred way):
Download the relevant libraries and put them in subfolders of the app before pushing it; the libraries will then be uploaded with the app. That is how I would do it (see the sketch after these options).
Once you have an SSH connection, use secure copy (scp) to upload the files and place them in the subfolders where they are expected.
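For option 1, a hypothetical sketch (the Blender tarball name is illustrative; pick a Linux build that matches your stack):

# unpack a precompiled Linux build of Blender into the app folder before pushing
mkdir -p vendor/blender
tar -xf blender-2.79b-linux-glibc219-x86_64.tar.bz2 -C vendor/blender --strip-components=1
cf push my-app

The app can then call vendor/blender/blender at runtime, as long as the build's shared-library dependencies exist in the stack.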
Regarding Blender, the question is what you need in addition to having the code copied over. Does it need a running daemon? Are there more dependencies? You would need to share more information about your specific app to answer that. Maybe packaging everything as one or more containers and running it on Kubernetes, or on a combination of Cloud Foundry and Kubernetes, is a better way.

Use package manager in a Cloud Foundry instance

Can I use apt-get or other package managers in Cloud Foundry buildpacks or in the .profile scripts that come with apps, and if I can, how do I do it? I expected it to work the same way as in a Dockerfile, but in my case it doesn't, with or without sudo.
Can I use apt-get or other package managers in Cloud Foundry buildpacks or in the .profile scripts that come with apps, and if I can, how do I do it?
No. Running apt-get or a package manager would typically require root access, and you do not get root access when the buildpack runs or when your application runs (this is a difference from Docker).
That said, you can do anything that doesn't require root access, so if you found a package manager that installs into the vcap user's home directory and doesn't need root, then you could use that.
It depends on what you're trying to install, but in some cases you can work around this by downloading the .deb or .rpm file and manually extracting the binaries; a sketch follows below. This typically works OK for things like shared libraries. Just download the precompiled binary that matches your stack (cflinuxfs2 == Ubuntu Trusty). For other things, you can build your own binaries from source. This is what the buildpacks do; see binary-builder.
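For the extraction route, a hypothetical sketch (libexample.deb stands in for whatever package you actually need):

# extract a .deb without root: files land under ./vendor instead of /
dpkg -x libexample.deb ./vendor

# make the extracted shared libraries visible to the app at runtime
export LD_LIBRARY_PATH="$PWD/vendor/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH"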
Hope that helps!